14th IEEE Integrated STEM Education Conference — 9 AM - 5 PM EST, Saturday, March 9

Onsite venue: McDonnell and Jadwin Halls, Princeton University, NJ. Virtual attendees join via Zoom.

Session Poster-01

Poster 01 — Poster Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

Detecting Food Allergies through Scratch Testing and Blood Tests

Shreya Dutt (MCMSNJ, USA)

Food allergies are increasingly prevalent. By 2019, about 6 million children in the U.S. had been reported to have at least one food allergy, which amounts to roughly two kids in every classroom. The cause of food allergies has long been debated; recent research indicates that although genetics can be a contributing factor, about two thirds of children with a food allergy do not have a parent with one. Because food allergies have increased so sharply within a single generation, researchers suspect a correlation with the climate change that has occurred over the same decades. Before food allergies can be treated or prevented, they must first be diagnosed. There are two main ways to detect them: scratch testing and allergen-specific Immunoglobulin E (IgE) blood testing. A scratch test is a simple skin test in which drops of specific allergens are placed on the skin and the skin is lightly scratched to expose the person to that allergen. A person allergic to an allergen will develop a bump at the skin site within 20 minutes; the bumps are traced and visually compared to determine the level of allergy. The IgE blood test measures the level of IgE associated with allergic reactions in the blood, which helps detect an allergy to the particular allergen being tested for. When someone is exposed to an allergen such as peanuts, dairy, or tree nuts, the body may perceive it as an antigen and produce a particular IgE that binds to mast cells and basophils in the skin, GI tract, or respiratory system. The next time the body is exposed to that allergen, the IgE antibodies trigger the mast cells to release histamine, which causes allergic reactions or anaphylaxis.
A specific IgE level test can measure the level of response to different allergens in a person. Though there is room for improvement in accuracy, the two forms of testing used together are effective in diagnosing allergies and determining allergy levels for the top allergens and many more. These testing methods can facilitate further research into food allergy causes and treatments, creating a brighter future for the next generations of children.
Speaker
Speaker biography is not available.

The Feasibility of Coffee Grounds and Coconut-based Antimicrobial Exfoliant

Diego Lorenzo C. Donato (Philippines); Juris Roi D. Orpilla, Adrian Gabriel H. De Guzman and Matthew Jonathan T. Hallasgo (La Salle Green Hills, Philippines)

Between the first quarters of 2022 and 2023, the Philippine coconut and coffee bean industries saw production increases of 1.88% and 1.3%, respectively, due to the rapid modernization of the country's agricultural sector; this growth, however, brought an increase in agricultural byproducts, which have had adverse effects on human health, the environment, and the socio-economic progress of the Philippines. To address this issue, this paper studies the feasibility of producing an antimicrobial exfoliant out of agricultural byproducts, namely coconut husk, coconut shell, coconut oil, and coffee grounds, with xanthan gum as an additive. Given the importance of observing the United Nations Sustainable Development Goals (SDGs), the product's production process and benefits are designed to align with goals 3, 6, 8, 11, 12, 13, 14, and 17. First, the production process is environmentally friendly, making it safe for bodies of water and the life below them. The benefits include improved skin health; optimized use of the country's agricultural wastes; the use of biodegradable materials; decreased amounts of agricultural byproduct in landfills; improved air quality; and an improved agricultural economy in developing countries such as the Philippines. For this study, the independent variable is the coffee grounds and coconut-based antimicrobial exfoliant, while the dependent variable is its effectiveness and feasibility, tested through four experiments: agar disk diffusion for antibacterial properties, modified tape stripping for abrasion properties, a pH test to indicate whether the product is safe for human use, and a spreadability test to determine its ease of application on skin.
The results showed that an exfoliant made from agricultural waste products is feasible and comparable to exfoliants available in the market; however, it is recommended that future researchers advance the study of natural products, especially agricultural byproducts, as ingredients in medicinal or cosmetic products.

Deep Learning Approach to Early Detection of Rheumatoid Arthritis

Saket Pathak (Silver Creek High School, USA)

Rheumatoid arthritis (RA) is a chronic inflammatory disorder affecting areas such as the hands and feet. According to MedicalNewsToday, roughly 1.3 million people in the US have RA, representing 0.6 to 1% of the population. Artificial intelligence (AI), the ability of machines to perform tasks that typically require human intelligence, is becoming more widespread in areas such as healthcare. AI has previously been used to detect the more common osteoarthritis, and RA detection methods are starting to emerge too. However, these detection methods rely on X-rays and protein scans, which take time and money. Since arthritis is a disorder of the joints, automating its detection from ordinary images offers a new approach. To this end, two image datasets were used: the first contained healthy hands with no arthritis symptoms, and the second contained hands with nodules, the bumps that are a symptom of RA. The model was built with Jupyter Notebook, TensorFlow, Keras, and Python 3.9; the data went through preprocessing, scaling, and splitting for faster training. A convolutional neural network was then trained using model.fit. The model reached an accuracy of 99.48% and could reliably classify between the two datasets. The conclusion is that classifying RA from just a scan of someone's hand could, once perfected, allow for a faster diagnosis of arthritis in the future.
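The preprocessing steps the abstract mentions (scaling pixel values and splitting the data before training) can be sketched in plain Python. This is an illustrative reconstruction, not the study's actual pipeline: the toy 4-pixel "images", the 80/20 split, and the function name are all our assumptions.

```python
import random

def preprocess_and_split(images, labels, train_frac=0.8, seed=0):
    """Scale 8-bit pixel values to [0, 1], shuffle, and split train/test."""
    scaled = [[px / 255.0 for px in img] for img in images]
    pairs = list(zip(scaled, labels))
    random.Random(seed).shuffle(pairs)  # fixed seed for a reproducible split
    cut = int(len(pairs) * train_frac)
    return pairs[:cut], pairs[cut:]

# Toy data: ten 4-pixel "images"; label 0 = healthy hand, 1 = nodules
images = [[i, i + 1, i + 2, i + 3] for i in range(10)]
labels = [i % 2 for i in range(10)]
train, test = preprocess_and_split(images, labels)
```

In a real setup the scaled arrays would then be passed to the network's fit routine.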

Exploring cybercrimes on Roblox

Maya Patwardhan (Germantown Academy, USA)

Roblox is an online universe in which users can create and choose from many different games to play. In 2023, the platform averaged 214 million users per month and generated USD 7 million per day. Popular games on Roblox include Adopt Me!, Mega Easy Obby, Arsenal, Brookhaven RP, and Welcome to Bloxburg. However, is this online platform safe to use? What types of cybercrimes occur through Roblox? To find information about these crimes, I used the Google search engine to collect articles, starting with the keyword "roblox crimes". This produced numerous articles containing data about the various crimes committed, which I used in turn as keywords in further searches. Altogether, I found 16 articles on news, security, and magazine websites. The following types of crimes happen on Roblox: hacking, beaming, data breaches, ransom, malware, scams, spreading hate in game, and in-person violence. User accounts can be hacked; in one case, cybercriminals hacked multiple accounts and changed the profile information to say "Ask your parents to vote for Trump". Beaming is when an account is hacked and valuable in-game, Robux, or limited-edition items are stolen and then sold. Data breaches occur when sensitive information about individuals is stolen, including names, usernames, phone numbers, email addresses, IP addresses, home addresses, and dates of birth. Cybercriminals can then hold this information hostage and demand a ransom to prevent it from being leaked online. Users can be tricked into downloading malware (malicious software) onto their devices; 9.5% of malicious files are spread via Roblox. Scams are also prevalent: fake websites claim to give users free Robux while stealing their information, and in another case users paid small amounts (approximately 60 cents) for nonexistent prizes.
Roblox has also been used to recreate famous terrorist attacks that users could experience, which results in the spread of hate. Predators have posed as users, befriended their targets (children) in game, and later encouraged these victims to meet in person, leading to in-person violence such as assaults. Collectively, these crimes can cause depression in kids, leak user information online, con users out of their money, and infect their devices. To make sure you or someone you know is safe in these online games, it is necessary to learn about the crimes that can happen both inside and outside the game.

Session Poster-02

Poster 02 — Poster Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

Development of a Coconut Coir Diaper

Gabriel L. Legaspi, Marco Juhann G. Ortega, Gabriel Antonio S. Prospero, Freya Yggdrasil Soltura and Samantha Q. Tachado (La Salle Green Hills, Philippines)

Extensive research revealed that in 2022 the Philippines was the fourth-largest waste generator in Southeast Asia and considered a top contributor to ocean pollution. Healthcare waste alone, generated from June 2020 to April 2022, weighed around 1,400 metric tons every day, according to the Environmental Management Bureau (EMB). The country's overall garbage production keeps increasing due to rapid population growth and urbanization, and a lack of resources prevents the government from executing efficient waste management, which in turn leads to environmental and health problems. Non-biodegradable diapers contribute greatly to this healthcare waste, creating a need to explore biodegradable materials such as organic natural fibers like coconut coir. This research assesses whether a coconut coir diaper satisfies the criteria set by the researchers for durability and absorbency. Unlike previous studies, this paper uses coconut coir fiber as the main material for the diaper. The diaper underwent multiple treatments and tests, including intensive chemical sterilization, a gravimetric test, and a rate-of-absorption test. The research reveals that the prototype closely replicates the qualities of a commercial diaper in terms of absorbency and durability; a t-test shows no statistically significant difference between the prototype and the commercial diaper. In addition, producing coconut coir diapers is cost-effective because coconut coir is readily available in the country. Not only does this coconut coir diaper pave the way for repurposing agricultural waste, potentially alleviating waste-related issues, but it also addresses the severe waste management problems that communities currently face.
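The comparison described above rests on a two-sample t-test. As a reference for how such a statistic is computed, here is a minimal sketch using Welch's formula; the absorbency values are made up for illustration and are not the study's data:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal-variance form)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

prototype = [410, 395, 402, 399]    # hypothetical absorbency readings (mL)
commercial = [405, 400, 398, 403]
t = welch_t(prototype, commercial)  # |t| below the critical value => no significant difference
```

A |t| smaller than the critical value for the chosen significance level supports the "no significant difference" conclusion the abstract reports.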

Enhancing the Desiccation Tolerance of Arabidopsis thaliana with Proteins from Ramazzotius varieornatus

Deven R Butani (USA)

Drought poses a great threat to crops around the world. Decreased water availability in soils sharply reduces crop yield and productivity, negatively affecting the food supply. This has caused major problems for agricultural industries and the millions who rely on these crops for food, especially as climate change exacerbates droughts, making dry periods longer, more frequent, and more severe. Tardigrades are microscopic organisms renowned for their remarkable resistance to extreme conditions; most notably, they can survive desiccation, or drying. For protection against water scarcity, tardigrades use species-specific intrinsically disordered proteins known as Tardigrade-Disordered Proteins (TDPs). Among these are the Cytoplasmic/Cytosolic Abundant Heat Soluble (CAHS) proteins, which are located in the cytoplasm and primarily protect a tardigrade's cells from desiccation. This research aims to introduce CAHS-expressing genes into thale cress so the plants can survive and thrive after desiccation, as these mechanisms have proved crucial to tardigrades' survival and well-being after such an event. If the plants show increased health and yield after dry periods, genetically engineering plants with tardigrade proteins could prove extremely beneficial to crop productivity in the agricultural industry.

Balloon Car

Arden Upadya (Morristown Beard School, USA)

I created a car that moves by itself after air is blown into a balloon attached to the car, to demonstrate certain aspects of physics. The car consists of a Gatorade bottle as the body, four Gatorade bottle caps as the wheels, three straws, two skewers, and a balloon. First, two straws are attached to the bottom of the bottle and the two skewers are put through the straws. Next, a hole is made in each bottle cap and the caps are put on the ends of the skewers so that they can spin freely. Then, a hole is made in the top of the bottle and a straw is put through it, pointed toward the back of the bottle. Lastly, a balloon is attached to that straw and secured with a rubber band. After the balloon is inflated, the car moves forward until the balloon deflates, and sometimes a little longer after the air runs out. The car moves because the air escaping the balloon propels it forward. The energy stored in the inflated balloon is potential energy, which is converted to kinetic energy once the car starts moving; energy is neither created nor destroyed, only converted into different forms. The experiment also illustrates Newton's Laws of Motion. Newton's First Law is displayed because the stationary car does not move until it is acted on by the escaping air; similarly, once in motion, it does not stop until the air runs out and friction brings it to a halt. Newton's Second Law is seen in how the amount of air, and therefore force, put into the balloon results in a different acceleration and total distance. Newton's Third Law is shown as the action (air rushing backward out of the deflating balloon) produces an equal and opposite reaction (the car being pushed forward). These important principles of physics can be observed through the motion of a balloon-powered car in the form of a fun science project!
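Newton's Second Law above can be made concrete with a back-of-the-envelope calculation; the thrust and mass values below are hypothetical, chosen only to show the arithmetic:

```python
def acceleration(net_force_n, mass_kg):
    """Newton's second law rearranged: a = F / m."""
    return net_force_n / mass_kg

# Hypothetical balloon car: 0.3 N of thrust from the escaping air acting
# on a 0.15 kg bottle-and-caps chassis
a = acceleration(0.3, 0.15)  # 2.0 m/s^2
```

More air in the balloon means more thrust, hence a larger acceleration for the same chassis mass, matching the observation in the abstract.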

Motion Planning Control of a Qbot2 – Using a Neural Network Controller

Saami Ali (Cold Spring Harbor High School, USA)

This project investigates the trajectory tracking motion control problem for the QBOT 2, an autonomous wheeled mobile robot. The robot operates as part of a wireless control system, in which the control signal is transmitted to the robot wirelessly. In such a system, perturbations caused by the wireless channel can interfere with the feedback signal, causing errors in the system's tracking response. This work focuses specifically on the perturbations caused by the uncertain, time-varying delays inherent in wireless communication links. To that end, the delays in the feedback signal are modeled by determining the changes they produce in the closed-loop behavior, and a control methodology to eliminate the tracking error caused by these delays is developed. The tracking control methods generally used in wheeled mobile robots do not compensate for these uncertainties, so an adaptive neural network control methodology is proposed for the robot. The approach combines a neural network-based kinematic controller with model reference adaptive control. The kinematic controller parameters are updated online using artificial neural networks to force the robot's tracking error to converge to zero. The goal is to provide both simulation and hardware implementation to illustrate the convergence of the proposed control scheme.
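The kinematic layer of a differential-drive robot like the QBOT 2 is commonly modeled with the unicycle equations. The sketch below shows only this plant model, one Euler integration step under assumed linear and angular velocity commands; the neural-network controller and adaptive law from the abstract are beyond a short example, and the function name is ours:

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler step of the unicycle kinematics:
    x' = v*cos(theta), y' = v*sin(theta), theta' = omega."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive straight along the x-axis for one second at 1 m/s
pose = unicycle_step(0.0, 0.0, 0.0, v=1.0, omega=0.0, dt=1.0)
```

A kinematic tracking controller chooses v and omega at each step so that this pose converges to a reference trajectory despite feedback delays.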

Session Poster-03

Poster 03 — Poster Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

Pi Song: Discover the Harmony of Numbers and Notes

Julia Lu (Pioneer Valley Chinese Immersion Charter School, USA)

"Pi Song" is an interdisciplinary project uniting Science, Technology, Engineering, Arts, and Mathematics (STEAM) to convert the digits of Pi into a harmonious auditory experience. The project's main innovation is transforming the precision of mathematics into a melodious experience, illustrating the intrinsic beauty of mathematical concepts through musical expression. From a scientific perspective, we constructed a unique musical instrument from Lego components integrated with an ultrasonic sensor. The sensor measures distance and the instrument plays musical notes from A to G# (A, B, C#, D, E, F#, G#), allowing us to play various melodies: the sensor lets the instrument "see" where objects are, and numerical distances are segmented into seven distinct ranges, each corresponding to one of the notes A through G#. From an engineering perspective, we used LEGO NXT at the core and engineered a Lego instrument that plays music, with every note fine-tuned so that the mathematical precision of Pi carries through to every note played. Technologically, a Python script was developed to transform the first 100 digits of Pi into a sequence of musical notes, turning raw numbers into a score for the senses; we then used Flat.io to turn the notes produced by the script into a music score with actual note values. Mathematically, the challenge was to assign musical notes to the digits of Pi, which was addressed by converting base-10 digits into base 7 to accommodate all possible values. This approach not only solved the issue of representing digits beyond G# (the seventh note) but also introduced a novel method of encoding numbers into music. Lastly, from a musical perspective, the Pi Song uses notes of different rhythmic values.
In our music piece, we used quarter notes (worth one beat) and eighth notes (worth half a beat). Because we rewrote base-10 digits as base-7 numbers, the digits 7 through 9 became two base-7 digits each; each such pair was rendered as two eighth notes worth one beat together, while every other note was a quarter note worth one beat. This choice preserved the piece's temporal structure, with every note imbued with the essence of Pi. The Pi Song project ends with a performance that combines a classical violin with the custom-built Lego instrument, offering a multi-sensory experience of Pi through music. This project not only shows the creative fusion of STEAM disciplines but also serves as an experiment exploring the educational and aesthetic potential of translating mathematical phenomena into the universal language of music.
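The base-7 encoding described above can be sketched in a few lines of Python. This is our reconstruction of the mapping from the abstract's description, not the project's actual script:

```python
# Seven notes, indexed by the base-7 digits 0-6
NOTES = ["A", "B", "C#", "D", "E", "F#", "G#"]

def digit_to_notes(d):
    """Map one base-10 digit of Pi (0-9) to one quarter note, or,
    for digits 7-9, to the two eighth notes of its base-7 form."""
    if d < 7:
        return [NOTES[d]]                     # single-digit base-7: quarter note
    return [NOTES[d // 7], NOTES[d % 7]]      # two-digit base-7: pair of eighth notes

pi_digits = [3, 1, 4, 1, 5, 9]
melody = [n for d in pi_digits for n in digit_to_notes(d)]
```

For example, the digit 9 is 12 in base 7, so it becomes the two eighth notes B and C#.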

Pronunciation correction service for individuals with hearing impairment: noncontact connection with volunteers

JungHyun Clair Park (Chadwick International, Korea (South))

Pronunciation plays a crucial role in shaping first impressions during initial conversations, and unclear pronunciation is often wrongly taken as a reflection of intelligence. Consequently, individuals with hearing impairments who struggle with pronunciation are susceptible to such misunderstandings. This paper addresses the issue by describing a prototype web service that provides remote assistance with pronunciation correction for individuals with hearing impairments. The service allows users to upload recorded audio files, which are then reviewed by non-hearing-impaired volunteers, who provide feedback by comparing the submitted pronunciation with standard pronunciation. Prior research has explored various approaches to similar issues, including mimicking mouth movements and automatic recognition using AI. This study analyzed these programs, considered the characteristics of the Korean language, and applied the most suitable platform and technology to create a service tailored to individuals with hearing impairments. The web service emphasizes accessibility, enabling individuals to receive pronunciation correction assistance without constraints of time, resources, or location. The development process used Figma for web design alongside coding of the web user interface, which together form the service's primary technologies.

Block4py: make logic with blocks and then do text coding!

Christina Cho (Phillips Academy Andover, USA); Seunghoon Ryu (Seoul International School, Korea (South)); Wonjae Choi (Chadwick International School, Korea (South))

In many schools in Korea, students learn block coding, such as Scratch, before learning text coding. When first learning logic, block coding, free of the burden of syntax, is considered the optimal method. However, when students later move to text coding, the focus shifts to memorizing syntax, and the skills acquired from block coding are not effectively carried over. The main reason is that text coding deals with data like numbers or characters rather than moving sprites as in block coding. A more effective approach to learning text coding could therefore be to provide block-coding content that handles numbers or characters, have students first create problem-solving logic with blocks, and then teach the Python syntax corresponding to those blocks. This approach could make learning text coding easier and more enjoyable for many students. In line with this approach, a website for learning Python has been created (https://block4py.org). The site presents problems, lets users first create logic with block coding, explains the Python syntax corresponding to the blocks, and then guides users through text coding in Python. The website is currently at the minimum viable product stage, and feedback is being gathered from friends and other students. Based on this feedback, the plan is to address any issues and publish the site as an easy-to-learn platform for everyone.

White Line Detection System for Safe Crosswalk Pedestrian Movement of Visually Impaired Individuals

Joonwoo Bae (Seoul International School, Korea (South))

This research explores the development of an assistive device based on smart glasses to enhance the mobility of visually impaired individuals walking and crossing roads independently. Leveraging the camera and sensor functions embedded in the smart glasses, a computer vision system was devised to help visually impaired users cross roads accurately. The system uses the smart glasses' camera to capture images every 1-2 seconds and transmits them to a smartphone. The smartphone, running a YOLO-tiny model, identifies the white line on the crosswalk floor and triggers a voice alert on the smart glasses, effectively warning visually impaired individuals if they deviate from the correct path. The approach involves collecting images of white lines and training the system on them to improve its line-detection accuracy. The resulting system offers real-time guidance, significantly improving users' ability to navigate road crossings. Future work includes developing a smartphone app incorporating the white line detection algorithm to further assist individuals with visual impairments.

Session Poster-04

Poster 04 — Poster Virtual

Mar 9 Sat, 10:30 AM — 11:00 AM EST

Beware the Hype around Information Technology!

Hamza Shoufan (Amity International School Abu Dhabi, United Arab Emirates)

Today, living in the Information Age, we use IT for almost everything in our lives. We use many versions of it, each designed for a specific function: social media to communicate wirelessly with people on the other side of the globe, apps like Word and PowerPoint to create content, and CAD and CAM technologies to design and manufacture. However, many technologies initially promise great success but never mature. So how should we behave when a new technology comes out? The Hype Cycle is a good starting point. Gartner's Hype Cycle for Emerging Technologies has five stages. The first is the ‘Innovation Trigger,' when the release of a new technology sparks interest and excitement. The second is the ‘Peak of Inflated Expectations,' when people's expectations for the new technology reach their highest point. Third is the ‘Trough of Disillusionment,' when the technology runs into challenges and failures and public interest declines. The ‘Slope of Enlightenment' is when people start to realise the technology's real potential and form realistic expectations. Finally, the ‘Plateau of Productivity' is when the technology becomes mainstream, widely accepted and used by organisations and businesses. Currently trending technologies, with generative AI as the main example, sit at the pinnacle of the second stage, the ‘Peak of Inflated Expectations.' The big secret lies in the stage's name: the word inflated. Think about people's views of ChatGPT right now. You would probably hear things like ‘it will change the world completely' and ‘it will become essential for survival.' Thinking realistically, one would realise these expectations are exaggerated, or, as the name says, inflated. Inflated expectations are mostly fuelled by enthusiasm from the media and early users.
The more people use the technology, the more insight they gain into its shortcomings, which lowers their expectations and probably reduces their usage, or makes them quit entirely. This is a big blow to smaller developers because of reduced revenue, but bigger developers such as OpenAI are less affected, as they can spend more money to further develop and integrate more features into their IT systems. This interaction between users' experience, expectations, and usage behaviour on one hand, and developers' investment, development, and optimisations on the other, helps improve the technology and raise people's expectations to a reasonable level again, allowing developers to guide it to the ‘Plateau of Productivity' phase. In conclusion, we should not set expectations for new IT technologies too high just because of media coverage and early users' reviews. One should rate these technologies realistically and fairly, as they probably will not meet the high standards of people caught up in the hype. As students, we should not expect ChatGPT to solve every homework assignment we have, and above all, we should not expect that it is there to help out with such assignments.

Predicting Grants for Hurricane Affected Homeowners Using Machine Learning Methods

Sumukh Venkatesh (USA)

In recent years, the escalating frequency and intensity of hurricanes have become a pressing concern due to the impacts of climate change. While homeowners of all walks of life have been affected by these increased damages, minority and low-income homeowners bear a disproportionate share of the damage. After extensive hurricane damage, homeowners often receive grants aimed at assisting recovery and rebuilding. These grants can encompass compensation, additional support for lower-income homeowners, elevation funds, and provisions for individual mitigation measures. Leveraging individual-level records sourced from the Louisiana Division of Administration via ProPublica, this research aims to predict the total amount in grants that homeowners receive and examines the variables with the greatest impact on the model, isolating those that may indicate bias in the distribution of aid. Machine learning techniques can predict grant allocation, making the system more practical and effective for homeowners. The research used the following algorithms: XGBoost, Random Forest, Support Vector Machine, K-Nearest Neighbors, and Logistic Regression. The Random Forest algorithm performed best, with an R-squared value of 0.893 for the final amount of grants received. By examining the data and applying machine learning models, this study enhances understanding of post-disaster grant distribution, aiding decision-making for disaster relief organizations and policymakers. Furthermore, it allows homeowners to predict whether they will be able to meet their housing needs after hurricane damage and exposes possible inequities in the grant allocation process.
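The R-squared value reported above is the coefficient of determination, the fraction of variance in grant amounts explained by the model. For reference, it can be computed directly; the grant amounts below are made up for illustration and are not the study's data:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    return 1 - ss_res / ss_tot

# Toy example with hypothetical grant amounts (USD)
y_true = [10_000, 25_000, 40_000, 55_000]
y_pred = [12_000, 24_000, 38_000, 56_000]
score = r_squared(y_true, y_pred)
```

A score of 1.0 means perfect prediction; 0.893, as reported for the Random Forest, means about 89% of the variance in final grant amounts is explained.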

Enhancing In-Cabin Monitoring Performance using Unity Eyes Generated Data

Raymond R Kim (Korea International School, Korea (South))

Research in autonomous driving has been gaining attention since the introduction of electric vehicles. Autonomous vehicles must conform to the levels set by the Society of Automotive Engineers (SAE), which has made driver monitoring a legal requirement. At the currently permitted level, the car must be in park to access its infotainment system; to advance to the next level of autonomy, where the gear state is not monitored, the driver's state must be monitored instead. Facial scanning is the first step in determining the driver's current state, especially drowsiness. However, the required training data are difficult to acquire, since collecting such data would be an invasion of privacy. This paper aims to overcome that challenge by generating the data with Unity Eyes, enabling enhanced performance. A model with a ResNet50 backbone achieved 66.0% accuracy when trained on a limited real dataset, whereas the same model trained with generated Unity Eyes data achieved 85.3% accuracy. Our ablation study showed that using Unity Eyes data is also more effective than known pre-trained models. This study demonstrates the effectiveness of generated data in situations where large-scale data collection is impossible and suggests potential future applications across a variety of studies.

Using the Swin-Transformer for Real & Fake Data Recognition in PC-Model

Jiyoon Park (Branksome Hall Asia, Korea (South))

Recently, due to the rapid development of generative AI technologies, the use of AI-generated images has increased significantly, making the distinction between real and fake images crucial. Generated images may be used in beneficial ways, such as data training and fast image generation, but the potential for misuse, such as deepfakes or the spread of false information, still exists. This study explores a novel model using the Swin-Transformer architecture to distinguish between real images and fake images generated by CNNs (Convolutional Neural Networks) and GANs (Generative Adversarial Networks). The Swin-Transformer, a successor to the Vision Transformer (ViT), applies the Transformer structure, which has shown outstanding performance in natural language processing, to the image domain and demonstrates excellent pixel-level segmentation performance. Distinguishing real from fake images requires detailed pixel-level analysis, in which the Swin-Transformer exhibits higher accuracy. Improving the performance of distinguishing between real and fake images is expected to set limits on indiscriminate image generation, with further effects such as preventing the misuse of AI images through program-based discrimination and legal sanctions.

Session Poster-17

Poster 17 — Poster On-site

Conference
10:30 AM — 3:15 PM EST
Local
Mar 9 Sat, 10:30 AM — 3:15 PM EST

Improving Sensing and Data Collection of Research for Lucid Dreaming

Alessandra V Manganaro (Winchester High School, Winchester, MA, USA)

Lucid dreams occur when the subject is aware that they're dreaming while still asleep. It is a state of REM sleep, typically characterized by heightened activity in the frontoparietal region of the brain (associated with self-reflection and memory) comparable to that during wakefulness. Growing attention, particularly in neuroscience, has recently been paid to this topic because of its links to the therapy of some neurological disorders, as well as its potential to unlock or enhance cognitive and creative abilities. Yet lucid dreaming remains relatively understudied due to the difficulty of collecting adequate data from subjects in a lab setting. These difficulties include the challenge of having subjects reliably induce lucid dreams, disruption by unfamiliar surroundings, and the lack of individuals who could be considered 'proficient enough' in the skill for it to be accurately studied. Lucid dreams can be induced using various cognitive exercises, usually after disruption of the REM stage (like waking up some hours after going to bed), by taking certain drugs like galantamine, or by specialized devices. Devices like the Remee Lucid Dream Mask have been created to help subjects achieve lucidity using visual and auditory patterns associated with being asleep, which the user learns to recognize during the REM phase. However, these products are expensive and have been shown to be largely ineffective, disrupting sleep more than achieving the goal. With the recent popularity of wearable life-sign monitoring devices, mostly aimed at physical fitness, non-invasive wearable brain monitoring devices are also being commercialized. For instance, the startup Neurable created headphones that include electroencephalogram (EEG) sensors with accuracy comparable to clinical-use equipment, with the intent of playing selected music based on the user's measured EEG state to promote and boost concentration.
This poster aims to give an introductory overview of recent reputable peer-reviewed results on the topic of lucid dreaming and to point to some available wearable brain-sensing devices and their characteristics. It is conceivable to combine the precedents mentioned above to create a more dependable device that aids subjects in achieving lucidity, while making equipment such as polysomnography gear or electrode bands, and the knowledge needed to use them, accessible to subjects in the comfort of their own homes. This could aid academic researchers in the oneirological field by enabling access to a larger and richer volume of data, both inside and outside the clinical environment. A greater understanding of lucid dreaming can help promote future projects, including gaining a better grasp of consciousness, the role and cognitive contributions of dreams in waking life, and the ability to better aid patients who have become unresponsive after brain injuries.

Using Prompt Engineering to Enhance STEM Education

Max Z Li (The Pingry School, USA)

With the advent of large language models (LLMs), such as ChatGPT, Gemini, and LLaMA, AI will forever change how education works. Many have been quick to point out the potential of these language models for academic dishonesty, which is a tangible problem. However, there is large potential for legitimate use in education. In a review article published in the European Journal of Education in 2023, Zhang and Tur concluded that "ChatGPT has the potential to revolutionize K-12 education through the provision of personalized learning opportunities, enhance learner motivation and involvement". Complex topics in STEM can be difficult for anyone to understand. AI-enabled personalized and interactive learning can help students get interested in STEM and learn at their own pace and capability. These models can serve as educational aids because they provide the unique capability of responding to questions in natural language, making material much easier for a K-12 student to digest step by step. However, there is a gap between the student and the LLM: the prompts given to an LLM need to be well designed in order to be effective for education. To use LLMs more appropriately for educational purposes, we propose a tool to fully utilize the educational potential of LLMs and reduce their use for academic dishonesty. The tool has a student register by giving the grade they're in as well as any topics they'd like to learn more about. Using prompt engineering techniques, the tool can prompt LLMs to produce educational content such as AI-generated quizzes and overviews, as well as to simplify complex topics further to aid understanding. For example, if I had trouble with the Pythagorean theorem, the tool would generate a well-designed prompt for the LLM, such as "I am a 9th grade student learning the Pythagorean theorem and you are my teacher. Give me an overview of the topic as well as a practice quiz ..."
With detailed prompts, an LLM can provide the necessary resources and explanations that a student needs to learn a topic effectively. Due to the conversational nature of these models, students can also easily ask follow-up questions about a topic to understand it further. The tool can also add more guidance to the prompt, such as forbidding the LLM from directly providing answers to homework or test problems, and can search for and provide examples and figures from other sources. The AI-enabled tool, effectively a virtual mentor, could help propel STEM education further and make STEM more interesting to students by explaining complex topics in a way that students understand. By using LLMs in education, we can help students understand a topic through interactive practice instead of just memorizing facts and putting them on a sheet of paper. We will demo the tool and results with our poster.
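The prompt-template idea can be sketched as a small function that wraps the student's grade and topic into a structured request. The function name and template wording here are illustrative, not the tool's actual implementation.

```python
# Sketch of the prompt-engineering step: assemble a structured prompt
# from the student's registered grade and chosen topic.
def build_prompt(grade: int, topic: str, forbid_answers: bool = True) -> str:
    parts = [
        f"I am a {grade}th grade student learning {topic} and you are my teacher.",
        "Give me an overview of the topic as well as a practice quiz.",
    ]
    if forbid_answers:
        # Guardrail against academic dishonesty, as described above
        parts.append("Do not directly provide answers to my homework or test problems.")
    return " ".join(parts)

print(build_prompt(9, "the Pythagorean theorem"))
```

A real tool would also handle ordinal suffixes (1st, 2nd, 3rd) and append the registered topics automatically; this sketch shows only the core template assembly.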

Comparing Single-Cell Modality Prediction Performance Across Different Machine Learning Models

Dabin A Chae (Manhattan High School, USA)

Every cell of an organism contains the same genetic information, yet each cell is differentiated during development by a process known as gene expression. Genes are expressed to form specific cell types, each with unique traits, such as skin and nerve cells. Gene expression begins when mRNA is transcribed from open, accessible regions of DNA. This mRNA is then translated into various proteins, which perform many functions within the cell. However, these processes are interconnected: protein levels regulate gene production and expression through post-translational modifications, which in turn can inhibit the opening of DNA for transcription and reduce the number of mRNA strands created. Today's machine learning techniques aim to understand the flow of information from DNA to RNA to protein in this regulatory cycle, which can provide insight into the origin of diseases. Yet most measurements of cellular systems come from a heterogeneous population of cell types. For example, a tumor sample taken from a patient may contain cancerous cells alongside skin cells, benign cells, and other cells irrelevant to the analysis. Analyzing these samples risks generalizing and masking the significance of individual cells. Single-cell datasets are used to understand the specific genomic information regarding the modalities of each cell type. However, collecting such data is resource-intensive, and cells can only be measured once, leading to sparse and noisy datasets. In addition, the modalities - DNA, RNA, and protein - are represented differently from each other, meaning we cannot simply merge them into one standardized dataset. Relating the modalities to each other can help scientists picture the regulatory cycle of gene expression, but doing so requires more data or a model that can accurately predict one modality from another.
In this study, we create and test several predictive model architectures that predict surface protein levels from gene expression. Each architecture contains relatively few parameters compared to those found in the Kaggle and OpenProblems competitions, to determine which type of model performs best without regard to hyperparameter tuning, number of layers, learning rates, etc. We employ CITE-seq data, a method that simultaneously measures protein and mRNA expression, taken from three healthy donors across 7 different cell types and containing 22,500 different gene expression levels. Machine learning models were trained on the gene expression levels of two donors to predict the protein levels of the third donor, and we evaluated them in terms of Mean Squared Error (MSE). Of the six models tested, the Lasso and Neural Network models had the best prediction performance, with MSEs of 3.15999 and 3.13334 respectively. Compared to the LightGBM (MSE ≈ 3.26653) and Attention-based (MSE ≈ 4.76837) models, these are relatively simple models widely used in regression tasks that do not require a lot of training data. The results indicate the potential of simpler approaches to overcome the sparsity of single-cell datasets and uncover the underlying biological characteristics of converting genotypes to phenotypes.
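The hold-one-donor-out evaluation above can be sketched as follows. The data here are synthetic stand-ins (random matrices with a sparse linear signal), not CITE-seq measurements, and the split simply mimics "train on two donors, test on the third".

```python
# Sketch: Lasso regression predicting a surface-protein level from
# gene-expression features, evaluated by held-out MSE.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n_cells, n_genes = 300, 50
X = rng.normal(size=(n_cells, n_genes))               # toy gene expression
true_w = np.zeros(n_genes)
true_w[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]              # sparse true signal
y = X @ true_w + rng.normal(scale=0.5, size=n_cells)  # toy protein level

# "Train on two donors, test on the third" becomes a simple split here
X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]
model = Lasso(alpha=0.1).fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
print(f"held-out MSE: {mse:.3f}")
```

The L1 penalty drives most coefficients to exactly zero, which is why Lasso copes well with the sparse, noisy setting the abstract describes.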

IBM Platform's Role in Resolving Adaptability Issues in Online Education Through AI Machine Learning

Jingxi Wang (Amer, USA)

According to Oxford College, online learning has increased almost 900% since it was first introduced in 2000. In recent years, it has transformed into not only a trustworthy way to receive traditional schooling but also a way to take additional courses outside of school. Despite this growth, the effectiveness of online education relative to traditional in-person instruction remains a critical issue. It is essential to continue improving online education systems so that the growing reliance on online educational platforms is well-placed. My hypothesis is that AI/machine learning techniques can highlight the shortcomings of online learning, showing possible ways to improve it to match traditional brick-and-mortar schools. This study utilized data from a Kaggle repository incorporating the following features: region of residence, age, time spent on online classes per day, medium for online classes, time spent on self-study, time spent on fitness, hours slept every night, time spent on social media, preferred social media platform, time spent on TV, number of meals per day, change in weight, health issues, activities to relieve stress, aspect most missed, time utilized, and connection to family, to investigate each student's satisfaction with online schooling. The research involved 1182 students of different age groups from schools across the Delhi National Capital Region, utilizing the IBM platform to deploy a variety of algorithms for the creation of predictive models. Random forest, logistic regression, and decision tree classifiers, with and without enhancements, were employed; they achieved moderate accuracy levels, above 50 percent.
Additionally, each algorithm highlighted feature significance: subject age (100%), aspect most missed outside of online education (98%), time spent in online classes (73%), activities done to relieve stress (65%), and time spent self-studying (46%) were identified as the most crucial in the random forest classifier model. Extensive research was conducted on the notable features, and strong correlations were identified among them, demonstrating high accuracy in predicting satisfaction with online education across all algorithms. To provide a comprehensive comparison, the study experiments with altering the number of data folds and presents Receiver Operating Characteristic (ROC) curves, F1 scores, and confusion matrices. A complete analysis of the machine learning results, including methods for improving accuracy, will be presented on the poster. Also included is a thorough look at the results from an academic perspective, and how the features can be incorporated to improve the quality of online education. Furthermore, attention will be given to the methodology used on IBM Watson, underscoring the advantages of cloud-based platforms for creating AI/ML-based predictive models.
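The feature-significance step can be sketched as follows. The data and labeling rule are synthetic illustrations, not the Kaggle dataset; the three feature names are examples drawn from the list above.

```python
# Sketch: extracting feature importances from a random-forest classifier,
# as in the satisfaction-prediction study described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 400
age = rng.integers(7, 60, n)
online_hours = rng.uniform(0, 10, n)
self_study = rng.uniform(0, 6, n)
X = np.column_stack([age, online_hours, self_study])
# Toy label: younger students with long online hours are less satisfied
satisfied = ((age > 18) | (online_hours < 4)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, satisfied)
for name, imp in zip(["age", "online_hours", "self_study"], clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

The importances sum to 1, so they can be read directly as the relative share of predictive signal each feature contributes, which is how a ranking like "subject age (100%)" is derived after normalizing to the top feature.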

Analyzing the Health of Lithium-ion Batteries through Heat Distribution and Thermal Modeling

Rohit Karthickeyan (John P Stevens High School, USA); Sushanth Balaraman (Edison High School, USA)

For decades, the primary focus in battery health assessment has been on metrics such as voltage levels and current flow. However, ThermoBatt shifts the lens towards the thermal attributes, a domain less explored but equally vital. ThermoBatt encompasses two innovative models: the first, a machine learning algorithm, predicts the State of Health (SOH) and Remaining Useful Life (RUL) of batteries by analyzing factors such as ambient temperature and usage cycles. The second, a real-time temperature distribution model, utilizes temperature data within charge/discharge cycles to simulate thermal behavior. This approach necessitates several assumptions, underscoring the pioneering nature of our exploration. ThermoBatt aims to deepen our understanding of how heat generation and distribution influence battery health and longevity. By bridging this knowledge gap, our work illuminates the interconnectedness of thermal dynamics with battery efficiency and endurance, paving the way for advancements in battery technology and sustainable energy solutions.

Exploring Cybersecurity Through Authenticating Wireless Communication for Mini Tank Robots

Andrew Y. Lu (Oyster River High School, USA)

Controlling robots wirelessly is a significant advance. With Bluetooth, we can direct robots to perform tasks in places people cannot access. But when robots rely on wireless connections, other people can intercept and interfere with them. In this poster, I will introduce a summer STEM camp hosted by the University of New Hampshire to present my first cybersecurity exploration experience. In this program, I learned the principles of Bluetooth and saw the security vulnerabilities of wireless communication through hands-on projects. Bluetooth is a short-range wireless technology for exchanging data between nearby devices. All the programs were run on a Ks0428 keyestudio Mini Tank Robot V2. All control logic and operation algorithms were implemented and debugged in the Arduino Integrated Development Environment (IDE). I used a BLE scanner to connect to the Bluetooth communication module on the Mini Tank Robot. I also learned how to use Bluetooth to send messages and code the robot to perform different actions upon receiving them. I observed how an outside attack interfered with the robot through the Bluetooth module. If we don't add authentication to the receiving end, people can hack into our system and take control. The most basic, introductory form of authentication can be easily cracked by brute force. To examine more secure methods, I tried three security mechanisms on the Mini Tank Robot: (1) security-message-based authentication, (2) instant one-time-passcode-based authentication, and (3) symmetric-key-cryptography-based authentication. This eye-opening STEM experience inspires me to explore more cybersecurity issues in robot design. I would like to share my experience with other students who are also interested in STEM and technology.
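The symmetric-key idea (mechanism 3) can be sketched as a challenge-response exchange: the robot issues a fresh nonce, and the controller must return an HMAC tag computed with the shared key, so a sniffed message cannot simply be replayed. This Python sketch illustrates the protocol logic only; the actual robot firmware runs on Arduino, and all names here are illustrative.

```python
# Sketch of symmetric-key challenge-response authentication using HMAC.
import hmac, hashlib, secrets

SHARED_KEY = b"robot-and-controller-secret"   # provisioned on both ends

def make_challenge() -> bytes:
    return secrets.token_bytes(16)             # fresh nonce per command

def sign(challenge: bytes, command: bytes) -> bytes:
    # Controller side: tag binds the command to this specific challenge
    return hmac.new(SHARED_KEY, challenge + command, hashlib.sha256).digest()

def robot_accepts(challenge: bytes, command: bytes, tag: bytes) -> bool:
    # Robot side: recompute and compare in constant time
    expected = sign(challenge, command)
    return hmac.compare_digest(expected, tag)

challenge = make_challenge()
tag = sign(challenge, b"FORWARD")
print(robot_accepts(challenge, b"FORWARD", tag))           # genuine controller
print(robot_accepts(challenge, b"FORWARD", b"\x00" * 32))  # attacker's guess
```

Because each challenge is random and single-use, brute-forcing the tag is infeasible and replaying an old command fails, unlike the basic passcode scheme the abstract describes as easily cracked.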

Artificial Intelligence-Based Traffic Signal Control for Urban Transportation Systems

Minghan He and Pablito Lake (Rutgers Preparatory School, USA)

Optimizing traffic signal control is crucial for the smooth operation of urban transportation systems. The challenge is to minimize vehicle delays and emissions, which requires a deep understanding of traffic dynamics at intersections. Traditional algorithms determined the number of vehicles stopping at an intersection by subtracting the flow of non-right-turning vehicles entering from the flow of vehicles exiting. Methods based on this approach do not take the full dynamics of traffic behavior into consideration, so their performance is limited. Acknowledging the significance of this fundamental aspect, our research introduces a novel algorithm. This approach goes beyond conventional methods, integrating the dynamics of starting and stopping cars, with the goal of surpassing the limitations of previous solutions. Our work stems from the recognition of the pivotal role that efficient traffic signal control plays in urban transportation systems. With a focus on minimizing average vehicle delay within a specific timeframe, we aspire to make a meaningful contribution to the establishment of a sustainable and intelligent traffic management system. The motivation behind the project lies in the pursuit of a harmonious balance between rapid vehicle throughput and reduced environmental impact. Our approach involves designing a variety of solutions by blending traditional traffic engineering principles with our new algorithm. In the proposed model, historical traffic data are used to train the prediction of traffic in future timeslots. Instantaneous intersection dynamics are also fed into the model to trigger parameter updates and traffic prediction. This learn-and-predict AI process improves the model's adaptation to change, tolerating local errors in a short timeframe without sacrificing overall performance. Prototyping plays a crucial role in testing and refining our approach.
Using VISSIM software with AI methods, simulations have confirmed the effectiveness of our model. Additionally, community engagement activities, such as workshops and demonstrations, offer valuable real-world insights that influence the development of our methodology. The prototype and simulations demonstrate promising results, highlighting a reduction in both average vehicle delay and the number of start-stop cycles. These findings align with our objectives of promoting efficient traffic flow and minimizing emissions. The positive feedback received during community engagement activities further confirms the potential real-world impact of our approach. The proposed algorithm can be refined with additional real-world data, and collaborating with local authorities on potential implementation at select intersections is a key objective. The next phase will leverage advanced deep learning algorithms to iteratively improve our model, ensuring it remains at the forefront of traffic signal optimization.
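The flow-conservation baseline that the new algorithm improves on can be sketched as a per-timeslot queue update: the queue grows by arrivals and shrinks by the green-phase discharge. The traffic numbers below are illustrative, not simulation output.

```python
# Sketch of the baseline queue-estimation idea: per-timeslot
# flow conservation at one intersection approach.
def update_queue(queue: int, arrivals: int, departures: int) -> int:
    """Vehicles still waiting after one timeslot; a queue cannot go negative."""
    return max(0, queue + arrivals - departures)

queue = 0
arrivals   = [5, 8, 6, 2, 0, 0]   # vehicles entering per timeslot (toy)
departures = [0, 0, 7, 7, 7, 7]   # green-phase discharge capacity (toy)
for a, d in zip(arrivals, departures):
    queue = update_queue(queue, a, d)
    print(queue)                   # queue trace: 5, 13, 12, 7, 0, 0
```

The proposed model goes beyond this by accounting for start-up and stopping dynamics (vehicles do not discharge at full capacity the instant the light turns green), which is precisely the limitation the abstract identifies in conservation-only methods.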

Solar's Future: Spin-coating Fabrication of Perovskite Solar Cells & Characterization of Effect of Interface Addition

Bowen Hou (USA)

Perovskite solar cells (PSCs) have the potential to convert more solar energy into electricity than ever before, as they break traditional silicon solar panels' Shockley-Queisser limit. However, because of their extreme fragility and a difficult fabrication process that usually requires a nitrogen glovebox, commercial production of PSCs has not yet been industrialized on a large scale. This research aims to improve fabrication by completing the process in ambient air (humidity above 50%) and investigating the effect of adding an interface layer to protect the PSC from rapid degradation. The solar cells in the control and interface-added groups will be further characterized to determine their efficiency and surface morphology.

Mirror Posture Detection Using Roll, Pitch, and Yaw Angles and an Error Equation

Chongwei Dai (PRISMS Research, USA)

Indoor exercise, which can enhance muscle strength and cardiovascular function, is becoming increasingly popular, but people continue to suffer injuries due to incorrect exercise postures. Direct observation by professional trainers has its limitations. Therefore, a new approach made possible by emerging technologies will be used: a mirror that detects postures. The mirror is designed to detect incorrect postures while the user exercises, achieved through an input of selected body exercises as coordinates and an algorithm that detects errors in the posture. The customers for the mirror encompass a broad spectrum of individuals seeking personalized rehabilitation, including those working from home, rehabilitation patients, and individuals passionate about their health and well-being. Additionally, the mirror extends its appeal to corporate wellness programs, further diversifying the customer base. The mirror's unique focus on rehabilitation sets it apart from traditional fitness mirrors, appealing to those with specific pain points in the rehabilitation process. The product addresses the challenges associated with sedentary lifestyles and the need for efficient rehabilitation solutions for individuals working from home. Rehabilitation patients, a crucial target segment, find value in the mirror's ability to simplify the complex rehabilitation process, providing motivation, expert monitoring, and efficient at-home exercises.
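The roll/pitch/yaw error idea from the title can be sketched as follows: compare a joint's measured angles against a reference posture and flag the pose when the combined error exceeds a threshold. The error metric (RMS over the three angles), the angle values, and the threshold are illustrative assumptions, not the mirror's actual equation.

```python
# Sketch: flagging incorrect posture from roll, pitch, and yaw angles.
import math

def posture_error(measured, reference):
    """Root-mean-square error across (roll, pitch, yaw), in degrees."""
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)) / 3)

reference = (0.0, 45.0, 0.0)   # target angles for a squat (toy values)
good = (2.0, 43.0, 1.0)        # slight deviation
bad  = (15.0, 70.0, 10.0)      # large deviation

THRESHOLD = 10.0               # degrees, an assumed tolerance
print(posture_error(good, reference) > THRESHOLD)  # within tolerance
print(posture_error(bad, reference) > THRESHOLD)   # flag the user
```

In practice the threshold would be tuned per exercise and per joint, since some movements tolerate far more angular variation than others.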

Rolling Across the Continents: Phylogenetic Relationships of the Isopoda

Evan Kang (Princeton High School, USA)

Isopods are a very diverse group of crustaceans, having colonized habitats from the ocean floor to treetops in tropical forests. Terrestrial isopods comprise a major portion of this diversity, with approximately 5,000 species, yet their evolutionary relationships have not been widely examined. With genetic sequencing techniques becoming more widely available, a group of students in the Princeton High School Research Program set out to determine how isopods have evolved and diverged since the Cretaceous Period, when the earliest terrestrial isopod fossils were set in amber. Our primary focus has been a portion of the cytochrome c oxidase subunit I gene, which is frequently used to differentiate species via DNA barcoding. DNA extraction was conducted using a Quick-DNA Tissue/Insect Miniprep Kit (Zymo Research), followed by polymerase chain reaction (PCR) and Sanger sequencing, after which sequences were compared using the online program ClustalW2 to generate a phylogenetic tree. Preliminary results suggest that much of the accepted phylogeny of terrestrial Isopoda needs revision, because many of the taxonomic classifications based on morphology do not align with the results of our genetic investigation. This suggests that our current understanding of isopod evolution is incomplete and that further genetic investigation is warranted: sequencing additional gene fragments and comparing living species to fossil isopod samples could establish a molecular clock indicating when different groups of terrestrial isopods diverged.
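The sequence-comparison step underlying tree-building tools like ClustalW2 can be illustrated with pairwise p-distances, the fraction of sites that differ between aligned sequences. The sequences below are short toy stand-ins, not real isopod COI data.

```python
# Sketch: pairwise p-distances between aligned gene fragments,
# the raw material for distance-based phylogenetic trees.
def p_distance(seq_a: str, seq_b: str) -> float:
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

sequences = {
    "sp_A": "ATGCGTACGTTA",
    "sp_B": "ATGCGTACGCTA",   # one site differs from sp_A
    "sp_C": "TTGAGTTCGCTG",   # more divergent
}
names = list(sequences)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(x, y, round(p_distance(sequences[x], sequences[y]), 3))
```

Species with small mutual distances cluster together on the tree; calibrating these distances against fossil dates is what would turn them into the molecular clock mentioned above.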

Dynamic Duos: Investigating the Composition of Powerful Pairs in Basketball with Network Analysis

Neel Iyer (High School, USA)

With the rise of analytics in basketball, research has started to focus on team chemistry via novel player roles that dynamically emerge within teams. According to Fewell et al. (2012), basketball teams can be represented as networks and explored to find relationships between individual players and team chemistry. In addition, related research (Hedquist, 2022) challenges the ability of traditional player roles (such as point guard, shooting guard, etc.) to capture the essence of players' roles in a team. As a result, traditional views of team composition are limited and don't provide enough insight to managers when optimizing for team dynamics. This paper examines the composition of high-performing duos to better capture a nuanced view of player importance from a team perspective. The methodology consisted of creating network diagrams (players as nodes, passes as edges) for 14 of the 16 playoff teams from the 2021-2022 NBA season using NetworkX. Our data was sourced from the NBA API and SportsReference. Once the networks were created, we computed a weighted centrality measure for each player, taking into account the player's betweenness centrality (a measure of the player's impact on flow through the team), player efficiency rating (a common statistical measure of a player's performance), and assist ratio (measuring their indirect contribution to the team). With these measures, we selected the player with the highest weighted centrality and their neighbor with the next highest centrality; we called these pairs high-performing duos. We then used k-means clustering to identify the broad player roles predominant in these duos. Our findings show that while 50% of duos consisted of a high-value player (evaluated using Value Over Replacement Player) with strong assist ratios and shooting abilities, 75% of duos were characterized by the presence of an agile support player.
This demonstrates that examining individual high-performers does not provide as nuanced a view as taking into account their dynamic with other support players. Examining duos is one way to provide insight into the team dynamics that may exist within team networks. Findings from this research can be used by managers as well as analysts who are looking to better understand and estimate player contributions and importance from the perspective of team dynamics.
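The duo-selection step can be sketched on a toy passing network. For simplicity this sketch ranks players by betweenness centrality alone, whereas the study blends betweenness with player efficiency rating and assist ratio; the network and position labels are invented for the example.

```python
# Sketch: rank players by betweenness centrality in a toy passing
# network, then pair the top player with their highest-ranked neighbor.
import networkx as nx

G = nx.Graph()
passes = [("PG", "SG", 120), ("PG", "C", 80), ("SG", "SF", 60),
          ("C", "PF", 40), ("PG", "SF", 30)]
for a, b, w in passes:
    G.add_edge(a, b, weight=w)   # weight = number of passes exchanged

# Unweighted betweenness over the pass graph; a weighted analysis would
# first convert pass counts to distances (e.g. 1 / count).
centrality = nx.betweenness_centrality(G)
top = max(centrality, key=centrality.get)
partner = max(G.neighbors(top), key=centrality.get)
print(top, partner)
```

Here the point guard mediates most shortest paths, and the center is the most central of the point guard's neighbors, so the sketch selects that pair as the candidate duo.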

Trichotillomania Video Detection and Reduction Therapy

Rachel Guhathakurta (USA)

Trichotillomania is a hair-pulling disorder that involves irresistible urges to pull hair from the scalp, eyebrows, eyelashes, and other areas. It ranges in severity from a mild nervous habit to being physically, emotionally, and socially debilitating. Reinforcement training can be effective in stopping unwanted behaviors. This paper outlines the creation of a program that utilizes machine learning and TensorFlow to signal to users that they are pulling their hair. The system is especially effective because it can legitimately hold users accountable through video detection, instead of relying on users to report their hair pulls. Images of hair pulling at seven different locations on the head (top of the head, hairline, left of the head, etc.) were divided into seven folders, with 900 images in each. Postures one, five, and six performed with 75.0, 83.9, and 81.2 percent accuracy respectively. Within each subset of data, lighting and backgrounds were diversified. This technology can apply to numerous other damaging habits, such as nail biting and scratching, providing an additional approach to destructive-habit reduction therapies.

DIY pH Indicator

Dia G Sharma (Middle School, USA)

pH is a huge part of people's lives all over the world. Ensuring the water you drink isn't too acidic or too basic is crucial for your health. My project will review the importance of and basic information about pH, the problems associated with it, and how to test the pH of different liquids. There is an extraordinarily simple way to make your own pH indicator that works by color signals. It requires only two ingredients commonly found at any local supermarket: red cabbage and water! I will show how to make a pH indicator and the impact it can have on countless lives throughout the globe.
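The color signals from a red-cabbage indicator can be read off a simple lookup table. The color bands below are approximate values from common chemistry-classroom charts, not precise measurements.

```python
# Sketch: mapping a red-cabbage indicator color to an approximate pH reading.
def classify_ph(color: str) -> str:
    bands = {
        "red": "strongly acidic (pH ~2)",
        "pink": "acidic (pH ~4)",
        "purple": "neutral (pH ~7)",
        "blue": "basic (pH ~8-9)",
        "green": "more basic (pH ~10-11)",
        "yellow": "strongly basic (pH ~12+)",
    }
    return bands.get(color.lower(), "unknown color")

print(classify_ph("purple"))   # plain water should stay near neutral
print(classify_ph("red"))      # lemon juice or vinegar
```

The anthocyanin pigment in red cabbage changes color with pH, which is why a single cheap ingredient can cover the whole acidic-to-basic range.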

Using UV Lights to Extend the Lives of Strawberries

Leo Dobrinsky (USA)

In 2021, 9.2 million tons of strawberries were produced globally; of those, 5.8 million were wasted. This is no surprise considering that, unlike other fruits, a strawberry's shelf life is only up to seven days (if handled properly). This does not include the time it takes to transfer them to the store, and then to the consumer. My solution is to extend the life of strawberries using ultraviolet light. Typically, stores try to extend the life of strawberries with temperature and humidity control. This is not very successful, which is why I have decided to use UV-C light, which kills harmful bacteria, mold, and yeast. Not only will this prolong the life of strawberries, but it will also preserve their freshness, flavor, and nutrients. In many third-world countries, people who do not have refrigerators would benefit from this project even more. I also plan to use UV-B light, which has similar properties to UV-C light but a different wavelength. This will not only add extra storage life to the strawberries but also give them extra nutrients (the U.S. Department of Agriculture demonstrated this by enhancing the quality of cabbage). My approach is multifaceted: I plan to combine different UV lights, visible light, and environmental factors, such as humidity and temperature, to experiment with the best scenario for strawberry preservation. This might create a protocol for the future of strawberry storage. Through this work, I hope that supermarkets will have delicious strawberries to store and sell long after harvest. Thus, my project will reduce the multi-billion-dollar waste of both money and food; the cost of fruit for consumers and farmers alike; and, lastly, it will minimize environmental damage to our planet. This is not just about saving strawberries; it is about creating a model for a more sustainable food system.

Development of an Alzheimer's Resource Website for Young Students, with Information and Python Functions for Data Manipulation, Machine Learning, and Brain Image Manipulation

Anabel Sha (Poolesville High School, USA); Amy Watanabe (Montgomery Blair High School, USA)

Early Onset Alzheimer's disease (EOAD) is a rare but devastating form of Alzheimer's disease that impacts younger adults, generally 60 years of age or younger. It is thought to affect between 220,000 and 640,000 Americans, beginning between the ages of 45 and 64. Most such cases do not run in families and hence can appear unexpectedly in any adult and impact the family in subtle ways. We set out to create a resource for high school students to understand what the disease is, what its symptoms are, and what resources exist to help their loved ones. Additionally, we have developed a library of Python functions to analyze publicly available data sources, create machine learning models, and display and analyze brain images. We hope to continue developing this resource and convert it into a public website.

AI-powered firefighting robot to manage high-risk situations while improving standard fire response time - Robot FireX

Siyona Lathar (School, USA)

The purpose of the firefighting robot is to detect fires in nearby areas and assist firefighters in dangerous situations by helping them quickly extinguish the fire. It can also prevent household fires, which occur in an average of 358,500 homes each year (NFPA). More than 3,000 Americans die in fires each year (FEMA). Fire Response Time (FRT) is crucial in such situations: it helps save lives and improves the chances of overall damage control. NFPA (National Fire Protection Association) Standard 1710 establishes an 80-second "turnout time" and a 240-second "travel time," which together make 5 minutes and 20 seconds. FRT depends on how soon the fire has been reported and on when Standard 1710 is triggered to meet these deadlines. Many fires are reported by affected residents at a very late stage, or by people outside the premises when no one is inside. This is where I saw Artificial Intelligence, robotics, and powerful sensor-based vision capabilities coming into play to create the 'Robot FireX'. The idea is to create a reliable system that can accurately locate and sense any of three signals: excessive heat, smoke, and the sound of a fire alarm. Recognizing any of these signals is the crucial first step for the 'Robot FireX': it captures the signal, immediately sends an alert to a mobile number, and simultaneously starts moving toward the fire to extinguish it with a water or carbon dioxide spray. The goal is to put out the fire as quickly and effectively as possible, minimize property damage, and reduce the number of lives lost each year in fire incidents. The 'Robot FireX' can be designed in various sizes depending on the setting, such as homes or industrial sites.
Additional AI-specific capabilities can be activated in 'Robot FireX' to bring further refinements, so that it can work in unpredictable environments to spot high risks like gas leaks and send improved alerts, such as live videos and images of the incident, along with GPS coordinates of the site, to the fire department. It can also serve as a crucial aid for a First Response Team. When used by firefighters, it can prove to be an extremely valuable tool for evaluating the entire scene and eliminating threats before bringing the situation under control.
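The any-of-three-signals trigger described can be sketched as follows; the threshold values and function names are hypothetical assumptions for illustration, not the actual Robot FireX implementation:

```python
# Hypothetical detection thresholds -- real values would come from sensor calibration.
HEAT_THRESHOLD_C = 60.0      # ambient temperature in Celsius
SMOKE_THRESHOLD_PPM = 300.0  # smoke concentration
ALARM_THRESHOLD_DB = 85.0    # sound level of a fire alarm

def detect_fire(temperature_c, smoke_ppm, sound_db):
    """Return the list of triggered signals; any one of them raises an alert."""
    triggered = []
    if temperature_c >= HEAT_THRESHOLD_C:
        triggered.append("heat")
    if smoke_ppm >= SMOKE_THRESHOLD_PPM:
        triggered.append("smoke")
    if sound_db >= ALARM_THRESHOLD_DB:
        triggered.append("alarm")
    return triggered

def respond(temperature_c, smoke_ppm, sound_db):
    """First step of the described pipeline: alert, then move toward the fire."""
    signals = detect_fire(temperature_c, smoke_ppm, sound_db)
    if signals:
        return {"alert_sent": True, "signals": signals, "action": "move_to_fire"}
    return {"alert_sent": False, "signals": [], "action": "patrol"}

print(respond(25.0, 450.0, 40.0))
```

In a real system, the thresholded readings would feed a trained classifier rather than fixed constants, but the any-of-three trigger logic stays the same.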

A Mathematical Proof of a Card Trick and its Algorithmic Applications in Computer Science

Rishi Balaji (Stanford Online High School, USA)

The goal of this project is to use mathematical principles to show how a 'magic' card trick works. The card trick involves a specific number of cards transitioning between a deck and a grid-like layout. It uses a set of repeated steps to move the spectator's card to the middle of the deck, so the performer can reveal the center card to the audience. The paper uses similar math-based ideas to provide a generalization of the trick, proving that it should work for any number of cards, given certain restrictions. Using the card trick as a basis, the project then expands on concepts inspired by the trick for more practical applications in fields like computer science, such as sorting and transitioning arrays between one and two dimensions, similar to the process in the card trick. This shows that even simple, ordinary things like a card trick can open up new possibilities in more advanced topics, which can be useful in areas across STEM.
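For the common 21-card version of such a trick, the deal/restack cycle and its 1D-to-2D array transitions can be sketched in Python. This is an illustrative reconstruction assuming three piles of seven cards, not necessarily the paper's exact trick:

```python
def deal_into_piles(deck, n_piles=3):
    """1D -> 2D: deal the deck row by row into n_piles columns."""
    return [deck[i::n_piles] for i in range(n_piles)]

def restack(piles, chosen_pile):
    """2D -> 1D: collect the piles with the chosen pile in the middle."""
    others = [p for i, p in enumerate(piles) if i != chosen_pile]
    return others[0] + piles[chosen_pile] + others[1]

def run_trick(deck, card, rounds=3):
    """Repeat deal/restack; the spectator only reveals which pile holds the card."""
    for _ in range(rounds):
        piles = deal_into_piles(deck)
        chosen = next(i for i, p in enumerate(piles) if card in p)
        deck = restack(piles, chosen)
    return deck

# With 21 cards, three rounds always bring the chosen card to the exact middle.
deck = list(range(21))
final = run_trick(deck, card=13)
print(final.index(13))  # -> 10
```

Each round shrinks the set of positions the chosen card can occupy, which is why three rounds suffice regardless of where the card starts.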

Protecting Shorelines with Triply Periodic Minimal Surface (TPMS) Inspired Breakwaters

Alex Yang and Michael Wen (USA)

Breakwaters have been used for millennia to reduce wave impact. Breakwaters are coastal structures that aim to disrupt waves by reducing their energy and their abrasive impact on the shoreline; the force generated by waves gradually erodes shorelines. Traditional breakwaters have proven useful for protecting shorelines, yet their drawbacks, including their impact on the surrounding ecosystem, difficulties in maintenance, and interference with fish migration, cannot be ignored. Breakwater designs have remained relatively static, with many comprising mound- or wall-based configurations. This study aims to innovate existing breakwater architecture by exploring the use of Triply Periodic Minimal Surface (TPMS) structures as breakwaters. TPMS shapes are three-dimensional periodic manifolds chosen for their mathematical simplicity, mechanical strength [1], cost-effectiveness, and ecological friendliness. This research employs Computational Fluid Dynamics (CFD) simulation methods to explore the effectiveness of different TPMS structures in reducing the amplitude and group velocity of incoming waves. The effectiveness of each structure is compared with other TPMS structures with modified design parameters, as well as with certain traditional breakwater designs of identical height and volume, namely a commonly deployed lattice design [3]. OpenFOAM is used as the primary computational tool to simulate wave impact, with olaFlow [4] as the primary solver. MSLattice [2] is employed in the creation of TPMS structures. This investigation aims to explore the feasibility of TPMS breakwaters and give rise to a new generation of breakwater architecture incorporating TPMS structures.
References: [1] O. Al-Ketan, D.-W. Lee, R. Rowshan, and R. K. Abu Al-Rub, "Functionally graded and multi-morphology sheet TPMS lattices: Design, manufacturing, and mechanical properties," Journal of the Mechanical Behavior of Biomedical Materials, vol. 102, 2020. [2] O. Al-Ketan and R. K. Abu Al-Rub, "MSLattice: A free software for generating uniform and graded lattices based on triply periodic minimal surfaces," Material Design & Processing Communications, vol. 3, 2020, doi: 10.1002/mdp2.205. [3] B. Dang, V. Nguyen-Van, P. Tran, M. Wahab, J. Lee, K. Hackl, and H. Nguyen-Xuan, "Mechanical and hydrodynamic characteristics of emerged porous Gyroid breakwaters based on triply periodic minimal surfaces," Apr. 2022. [4] P. Higuera, "CFD for waves," Jun. 2018.

2D to 3D spaces using straight lines

Juliette Hancock (Goetz Middle School, Jackson, NJ, USA); Jeanine Hancock (Goetz Middle School, Jackson, NJ, USA)

2D to 3D Curves using Straight Lines. Authors: Juliette Hancock, Jeanine Hancock, Valentina Sandoval, and Caleb Sandoval.
Our project explores how to create curves using straight lines in two-dimensional and three-dimensional spaces.
Two-dimensional parabolic curve: In two dimensions, we are going to use pencil and paper to create parabolic curves using straight lines. A parabolic curve is a U-shaped curve formed by the envelope of straight lines drawn between equally spaced points. Then, we are going to sew colored string on the parabolic curves to create string art.
Three-dimensional hyperbolic paraboloid: We plan to expand our two-dimensional projects into three dimensions by creating two different types of hyperbolic paraboloids. A hyperbolic paraboloid is a saddle-shaped structure that has both convex and concave curves. In the first project, we are going to use long coffee stirrers to create a hyperbolic paraboloid sculpture embedded in a tetrahedron, or triangular pyramid. In the other, we are going to use sliceforms to create hyperbolic paraboloids.
Applications: Hyperbolic paraboloids are used in everyday life. We see examples in food, bridges, roofs, and apparel.
● Pringles potato chips use the hyperbolic paraboloid shape to stack chips perfectly in a cylinder, which protects the chips from breaking and uses less shelf space. It also gives consumers more chips per container.
● In architecture, many structures use hyperbolic paraboloids. St. Aloysius, a church in Jackson, NJ (our hometown), as cited in Architect Magazine, has a hyperbolic paraboloid roof. This shape is often used as an inexpensive solution to long-span roof requirements, such as in sports arenas. The roof of the church has elegant and fluid lines like those you might see in a fabric tent; the "tent" of St. Aloysius is made from standing-seam metal panels. It is a beautiful structure to view from inside and outside (convex and concave).
● In apparel, we also see hyperbolic paraboloids on two sides of a tricorn hat (pirate hat) and on a nun's wimple.
What we learned: We learned what hyperbolic paraboloids are, how to create them in various ways, and their practical applications. We discovered the difference between a two-dimensional parabolic curve and a three-dimensional hyperbolic paraboloid. We found out the definitions of convex and concave in relation to hyperbolic paraboloids.

Spatial Tissue Differentiation in Bioprinted Organ Constructs

Tyler Wu (USA)

Bioprinting combines 3D printing with cell biology and materials science to create tissues and organs from scratch, which can then be used in transplantation and drug testing. Much like how a 3D printer deposits a plastic filament into a 3D structure, a bioprinter deposits a cell-laden bioink to form an organ structure. Upon fabrication, cells must differentiate into specific cell types for the organ to function. Current research focuses on differentiating stem cells into specific types of tissues. However, this approach overlooks that organs are not made of a single tissue type; they consist of a consortium of different tissues working together to complete a specific function. While knowledge of tissue differentiation can serve as a foundation for bioprinting, it is crucial to expand and apply this knowledge to the level of entire organs—to realize a future of readily accessible 3D printed organs, it is imperative to be able to control the spatial distribution of tissues. To achieve such a future, this project summarizes the different strategies used to direct spatial cell differentiation as well as important mechanical, chemical, and electrical bioink properties that can be manipulated. By reviewing current studies related to controlling cell differentiation in bioprinted constructs and evaluating the advantages and limitations of each technique, the aim is to identify shortcomings in current technology and to provide recommendations for areas of further focus. Novel methods are required to manipulate cells effectively, refine tissue organization, and control cell differentiation, and by regulating the distribution of specific cell types within an organ, it becomes feasible to fabricate organs with enhanced functionality.

Community Building: The Importance of STEAM

Sowmya Natarajan (Georgetown Day School, USA)

In 2022, I wrote an IEEE paper on my experience tutoring two young girls in math and the importance of women in STEAM. Building on three years of mentoring them, this paper discusses my involvement in teaching and holding STEAM festivals with youth in Washington, DC, who were primarily African-American, and with members of the Navajo Nation in Farmington, New Mexico. It first explores the lessons learned from tutoring, then discusses how those lessons were applied in creating two major STEAM camps/festivals supporting minority communities in Washington, DC and New Mexico. The paper finally explores the power of the arts to build capacity and create a learning environment that supports students in their educational journey in STEAM.

The Math Behind Machine Learning

Vas MV Grabarz (USA)

Systems of equations and matrices go hand-in-hand when representing data and its transformations. For example, each row of a matrix can represent a data point, while each column contains an attribute of the data. In machine learning, vectors are readily used to represent observations of data, and vector operations are a viable means of imitating neural networks. Math topics in artificial intelligence will be covered, and examples of linear algebra concepts will be demonstrated in Python. Various mathematical operations that usually go unnoticed will be revealed, showing the sheer importance of linear algebra in the realm of machine learning.
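The row-as-data-point convention and a one-layer network forward pass as a single matrix product can be sketched as below; the weights and data are made-up numbers for illustration:

```python
import numpy as np

# Each row of X is one observation (a data point); each column is an attribute.
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # 3 data points, 2 features each

# A tiny one-layer "neural network" is just a matrix product plus a bias;
# the weights and bias below are arbitrary illustrative values.
W = np.array([[0.5, -1.0],
              [0.25, 1.0]])  # maps 2 input features to 2 outputs
b = np.array([1.0, 0.0])

# Forward pass for ALL data points at once -- one matrix multiplication.
out = X @ W + b
print(out)
```

The `@` operator hides the systems of equations being solved row by row, which is exactly the "unnoticed" linear algebra the abstract refers to.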

The Next Level of Video Game Cheating

Isaac Newell and Jacob F Hackman (Holy Ghost Prep, USA)

Anti-cheat systems have undergone significant advancements, driven by the escalating arms race between developers and cheaters. Riot Games' Vanguard stands as a prime example of this evolution, employing kernel-level monitoring to detect and prevent cheats in the company's popular game titles. However, as anti-cheat technology becomes more sophisticated, so do the methods of circumvention. AI-powered external cheats have emerged as a formidable challenge, utilizing machine learning algorithms and external hardware to adapt and evade detection mechanisms. These cheats leverage intricate patterns and behaviors to mimic legitimate player actions, making them harder to identify and mitigate. We aim to create a basic cheat for a video game, employing AI and an external microcontroller. To achieve this, we will train an AI algorithm to recognize and locate targets within the game environment, such as enemy players. Once a target is identified, we will communicate its information to the microcontroller, which can turn this information into simulated movements indistinguishable from those of a real mouse, allowing the player's crosshair to automatically aim at the detected targets. Another possibility is to simulate mouse movements through a piece of software on the computer, although this entails an additional chance of detection. This combination of AI-powered target detection and mouse manipulation creates a cheat that can provide a significant advantage in gaming scenarios. Moreover, by not interacting with game memory and using an external device to send mouse movements, such cheats can be nearly impossible for even advanced anti-cheat systems to detect and counteract.

The impact of sleep deprivation on cognitive function in jumping spiders

Anja Gatzke and Erin Kim (Princeton High School, USA)

Chronic sleep deprivation is known to damage cognitive functions, specifically memory consolidation in the hippocampus. This research project, however, aims to discover the immediate effects of sleep deprivation on cognitive function in jumping spiders. It is hypothesized that sleep deprivation will cause a decline in cognitive function as a result of an increase in beta-amyloid protein plaques, due to dysfunction in breaking down amyloid precursors involved in the development of nerve cells, leading to declines in memory (Blumberg et al., 2022). Jumping spiders were used because they have circadian rhythms similar to those of humans; they were sleep-deprived through light and sound disturbances throughout the night. To test cognitive function, two methods were used and reaction time was recorded. The first test involved simulating a predator in the jumping spider's habitat, and the second tested spatial memory and reasoning by taking the spiders out of their normal containers for five minutes. In the first test, the average reaction time increased from 2.72 seconds to 14.81 seconds after a night of sleep deprivation. Similar results were found in the second test, with the average time taken to return to the web increasing from 121.05 seconds in the control to 235.07 seconds after sleep deprivation. Overall, it was determined that sleep deprivation, even in small quantities, was harmful not only to the cognitive function of jumping spiders but to their development as well. This research serves as a means to determine how a single night of sleep deprivation impacts cognition, providing the field with more information on just how harmful sleep deprivation is in small quantities.

How much screen time should kids have?

Zuko A Ranganathan (Hart Magnet School, Stamford CT, USA)

These days, a big topic of discussion in many families is how much screen time kids should get, and how much parents should control their kids' screen time. This is quite a tricky question, because kids are quite attracted to gadgets, and while these gadgets can help kids in their social and academic lives in various ways, they can also hurt their cognitive development. In this poster, I will talk about the pros and cons of screen time for kids. I will explore what the appropriate amount of screen time is for kids of different ages. Finally, I will give some tips for kids to use their gadgets in a fun, but safe, way.

K-12 Poster Session: Pull-Up Nets

Sanaa Jones (USA)

The first time I saw a pull-up net was when I started looking at topics for this conference. I watched a video of pull-up nets in motion and was amazed by how a flat 2D figure could be pulled together to become a 3D shape. For this project, I want to explore the world of pull-up nets! I want to start by creating pull-up nets for cubes, pyramids, and triangular and rectangular prisms. Next, I would like to analyze how many different nets can be created for a cube and other 3D shapes to see if there is a pattern for creating pull-up nets. Finally, I would like to create pull-up nets for the five platonic solids.

Da Vinci Bridge: Past, Present, and Future

Richard H Evans (USA)

In 1502, Leonardo da Vinci responded to a request to provide a bridge design connecting Istanbul with Galata. Even though his design was not selected, his bridge concept has become very popular, and many researchers and organizations have replicated Leonardo's design to determine its viability. Through my design, I will discuss the strength of such a bridge and whether this design should be considered in future bridge designs. I will build my version of a da Vinci bridge and demonstrate its strength by placing objects on it. I will discuss why this bridge concept is able to sustain considerable weight. I will also revisit the decision made in 1502: should they have selected da Vinci's bridge design to connect Istanbul with Galata? Should we consider elements of the da Vinci design in future bridges? I will explore various answers to these questions.

Improving the C++ Experience with Transpilers

Stephen E Hellings (Holy Ghost Preparatory School, USA)

Identification of Problem: C++ is a popular programming language aimed at performant and extensible programming. Due to its number of features, modern C++ grows more complex as time goes on. Rationale: Although it offers capabilities that cater to advanced users, it remains increasingly complex and hard to interpret for those beginning to use it. Approach: A new programming dialect, compiled to C++ through a "transpiler," is based on another programming language and relies on an Abstract Syntax Tree (AST). It was tested with multiple participants, all of whom use C++ on a daily or frequent basis, as well as with multiple new programmers who do not know C++ or are just beginning to learn it. Additional Information: The language the dialect is based on, Python, provides a syntax more comfortable for beginner programmers. Extending the functionality of Python with the features of C++ in a basic syntax provides a comfortable experience for new programmers who also seek to learn the concepts of C++. Results: All advanced testers who tried the experimental program reported no issues and stated that it provides all the features necessary for their use. All beginners in the experiment reported that it flattens the learning curve of C++ and provides a comfortable programming experience. Additional Information: The dialect and transpiler "Kurakura" has a dedicated website, "https://kurakura.firebirds.win/", where a pre-release version is available to the public. A more stable version available to internal members is periodically released to the public.
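An AST-based Python-to-C++ transpiler of the kind described might, at its core, walk the tree and emit C++ text. The sketch below is a toy illustration of that approach, not the actual Kurakura implementation:

```python
import ast

# Map Python binary operators to their C++ spellings.
CPP_OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def emit_expr(node):
    """Recursively turn a Python expression AST into C++ source text."""
    if isinstance(node, ast.Constant):
        return repr(node.value)
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.BinOp):
        op = CPP_OPS[type(node.op)]
        return f"({emit_expr(node.left)} {op} {emit_expr(node.right)})"
    raise NotImplementedError(type(node).__name__)

def transpile_assign(source):
    """Transpile a single Python assignment like 'x = a + 1' to C++."""
    stmt = ast.parse(source).body[0]
    assert isinstance(stmt, ast.Assign)
    target = stmt.targets[0].id
    return f"auto {target} = {emit_expr(stmt.value)};"

print(transpile_assign("total = price * 3 + 1"))
# -> auto total = ((price * 3) + 1);
```

A full dialect would add type inference, control flow, and function definitions on top of this node-by-node emission pattern.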

Artificial intelligence approach for predicting class I major histocompatibility complex epitope presentation and neo-epitope immunogenicity

Kathryn Jung (USA)

T cells help eliminate pathogens present in infected cells and help B cells make better and different kinds of antibodies to protect against extracellular microbes and toxic molecules. Because T cells cannot see the inside of cells to identify ones that have ingested pathogens or are synthesizing viral or mutant proteins, antigen presentation systems evolved to display on the cell surface information about the various antigens synthesized or ingested in cells. These systems provide a way to monitor the major subcellular compartments where pathogens are present and report their presence to the appropriate T cells. Endogenously synthesized antigens in the cytosol of all cells are presented to CD8+ T cells as peptides bound to major histocompatibility complex (MHC) class I molecules, thereby allowing identification and elimination of infected or cancerous cells by the CD8+ lymphocytes. Thus, identification of non-genetically encoded peptides, or neo-epitopes, eliciting an adaptive immune response is important for developing patient-specific cancer vaccines. However, the experimental process of validating candidate neo-epitopes is very resource-intensive, and a large portion of candidates are found to be non-immunogenic, making the identification of successful neo-epitopes difficult and time-consuming. A recent study showed that the BigMHC method, composed of seven pan-allelic deep neural networks trained on peptide-MHC eluted-ligand data from mass spectrometry assays and transfer-learned on data from assays of antigen-specific immune response, significantly improves the prediction of epitope presentation on a test set of 45,409 MHC ligands among 900,592 random negatives compared with four other state-of-the-art classifiers. It also showed that, after transfer learning on immunogenicity data, the precision of BigMHC is greater than that of several other state-of-the-art models in identifying immunogenic neo-epitopes, making BigMHC effective in clinical settings.
I noticed that a multi-allelic dataset from MHCflurry 2.0, consisting of MHC class I peptides each associated with a bag of six alleles, is used in the BigMHC method; in the single-allelic data, each peptide is associated with only one allele. However, even in the single-allelic data duplicates are possible: two identical peptides may appear, one assigned to one allele and the other to a different allele. I set out to examine such duplicates with my custom code and found that 3.142% of the single-allelic data are duplicates, raising the question of whether the BigMHC method's test results are affected by the duplicates. As expected, there were no notable differences based on my trained models. This examination and result raise another possibility: implementing multiple-instance learning (MIL) may be advantageous in immunogenicity prediction because it considers the multiple MHC alleles associated with a given peptide, as observed in the multi-allelic dataset. In single-instance learning, each peptide is associated with a single label (in this case, whether it elicits an active immune response), which may not fully capture the complexity of MHC-peptide interactions due to the high level of polymorphism in MHC class I molecules. If this approach succeeds, MIL will further enhance the accuracy and reliability of the BigMHC method, making it potentially even more beneficial in clinical settings.
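A duplicate check of the kind described might be sketched as below; the peptide and allele records are hypothetical examples, not the actual MHCflurry 2.0 data or the author's custom code:

```python
from collections import Counter

# Hypothetical single-allelic records: (peptide, allele) pairs.
records = [
    ("SIINFEKL",  "HLA-A*02:01"),
    ("SIINFEKL",  "HLA-B*07:02"),  # same peptide under a different allele
    ("GILGFVFTL", "HLA-A*02:01"),
    ("NLVPMVATV", "HLA-A*02:01"),
]

def duplicate_fraction(records):
    """Fraction of rows whose peptide appears more than once in the data."""
    counts = Counter(pep for pep, _ in records)
    dup_rows = sum(1 for pep, _ in records if counts[pep] > 1)
    return dup_rows / len(records)

print(f"{duplicate_fraction(records):.1%}")  # -> 50.0%
```

On the real dataset, the same counting logic would yield the reported 3.142% duplicate fraction.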

A New Statistical Measure of NFL Talent

Ezra Sol Lerman (USA)

What does it take to create a statistical approach for measuring the relative performance of pro football athletes? What can be improved upon from the latest advances in football statistics and analytics? This project focuses on developing a new way to compare and contrast on-field performances in the NFL to help differentiate between levels of players. The goal is a method of interpreting statistics that better shows how valuable players are to their teams and their relative performance compared to other players. The approach will blend ideas from existing advanced analytics with new algorithms that incorporate even more facets of the game. I plan to study articles on how professional statisticians have developed their own advanced algorithms, to understand the process behind creating such models and how they can be improved. Ultimately, I may focus on one particular position for this project, but over time I would like to expand the research to encompass all positions in the game. It would be exciting if the techniques developed could eventually be used by teams to guide their draft and free-agency decisions, combining statistics across the entire team to predict best fit and elevate the performance of the whole team. I will also explore how artificial intelligence algorithms can enhance accuracy and predictive power.

Enhanced Low-Power, Low-cost, and Very High Accuracy Smart Parking Solution for Urban Areas

Vivek Pragada (Central Bucks South High School, USA)

The United Nations projects that 70% of the global population will live in urban areas by 2050. This will further exacerbate the already challenging issue of urban parking; it is currently estimated that 45% of total traffic congestion is caused by drivers looking for parking. In our prior work, we proposed a cross-sensor-based urban parking solution consisting of a smart parking server (SPS) and smart parking units (SPUs). Each SPU utilizes a magnetometer sensor and an LPWA connectivity module. This was accomplished by configuring multiple thresholds in each SPU, such as an occupancy threshold, an adjacency threshold, and an opposite threshold, corresponding to various automobile makes and models. While these thresholds helped significantly in accurately determining parking-spot occupancy, our further analysis showed that configuring these various thresholds accurately is challenging, especially to support the full range of automobile makes and models, including electric vehicles (EVs) that tend to have lower ferrous content in their chassis. The configuration of appropriate occupancy and adjacency thresholds is critical to achieving high accuracy. To reduce the complexity of determining and configuring thresholds sensitive to various automobile makes and models, and to handle all practical parking events, we have developed an enhanced framework that demonstrates higher accuracy while dramatically reducing sensitivity to different automobile makes and models, including EVs. In this enhanced approach, each SPU is configured with only a single threshold T. T is chosen to be less than the change that would be caused by a vehicle with the lowest ferrous content being parked in an adjacent spot.
The interference from a car parking in an adjacent spot is much greater than most environmental fluctuations, allowing T to lie between these two values: just high enough to capture anything that could correspond to a parking event. The built-in redundancy of the system enables the surrounding SPUs to "correct" a spurious report, as they would not detect any change greater than T. Whenever an SPU's reading changes by some Δx > T within a specified duration Δt, it sends Δx to the SPS. All computation and deduction can be done server-side, as will be illustrated with various case analyses, enabling each SPU to be extremely simple. Because no additional processing is needed at the SPU, power consumption and cost are reduced, making the proposed approach much more efficient than current methods involving onboard filtering, processing, and sensor-level determination. The built-in redundancy of the system also lessens the effect of an SPU malfunctioning. This enhanced framework further enables the magnetometer to be extremely low power, because it requires only a minimal sampling rate to check every Δt seconds. Rather than analyzing a complex signal for each parking event, it only needs to look at the overall change (displacement) in magnitude over the duration Δt. This greatly reduces the power required and helps prevent false readings from momentary spikes in magnetic flux.
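The single-threshold reporting and server-side deduction described can be sketched as follows; the threshold value, units, and the neighbor-correction rule shown are illustrative assumptions, not the exact framework:

```python
T = 5.0   # single per-SPU threshold (hypothetical units of magnetic flux change)

def spu_report(prev_reading, curr_reading):
    """SPU-side logic: report the raw change dx only when |dx| exceeds T."""
    dx = curr_reading - prev_reading
    return dx if abs(dx) > T else None

def classify_event(dx, neighbor_dxs, adjacent_min=20.0):
    """Server-side deduction (illustrative): a real parking event produces a
    large change at one SPU while its neighbors see changes below T, letting
    the surrounding SPUs 'correct' interference-only readings. adjacent_min
    is a hypothetical server-side estimate of the smallest change a vehicle
    in the spot itself would cause."""
    if dx is None:
        return "no_event"
    neighbors_quiet = all(n is None for n in neighbor_dxs)
    if abs(dx) >= adjacent_min and neighbors_quiet:
        return "occupied" if dx > 0 else "vacated"
    return "interference"

# A car parks over SPU 2; its neighbors see only small changes (below T).
reports = {1: spu_report(100.0, 103.0),   # None: change of 3 is below T
           2: spu_report(100.0, 160.0),   # 60.0: large change
           3: spu_report(100.0, 102.0)}   # None
print(classify_event(reports[2], [reports[1], reports[3]]))  # -> occupied
```

Keeping only the comparison against T on the SPU is what allows the sensor to sample at a minimal rate and leave all heavier deduction to the server.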

Detecting Elementary Particles With a Homemade Cloud Chamber

Judah Lerman (Princeton Middle School, USA)

It is amazing that the universe contains elementary particles that are too small to see, even with the most powerful microscopes. But how do we know such particles really exist, and how can we prove this at home without expensive scientific equipment? I once saw a cloud chamber in a science museum that illuminated the pathways of tiny particles bombarding the Earth from outer space. For this project, I will explore how to build a cloud chamber at home to detect the presence of elementary particles and record the evidence. What types of particles can be detected? What makes a quality cloud chamber, and how does a homemade cloud chamber compare to professional ones at museums and science labs? What are some applications of cloud chambers, and how do they help us understand the universe we live in?

Eco-Friendly Remediation of PFOA Contamination using BTs-ZVI (Banana Peel, Tapioca - Zero Valent Iron)

Emily Jooah Lee (The Lawrenceville School, USA)

Perfluoroalkyl and polyfluoroalkyl substances (PFASs) have become a significant environmental concern due to their widespread use and persistence. This study addresses the emerging issue of PFAS contamination, focusing on perfluorooctanoic acid (PFOA), a particularly troublesome compound. PFASs are found not only in drinking water, where they adsorb onto microplastics, but also in various cosmetic products, presenting a multifaceted exposure risk. Despite ongoing regulatory developments by agencies such as the U.S. Environmental Protection Agency (USEPA), the prevalence of PFASs, especially PFOA, in drinking water remains alarming. New Jersey, in particular, stands out as a hotspot for contamination, affecting over 500,000 individuals. In response to this critical issue, our research aims to propose a sustainable and efficient method for the removal of PFOA from drinking water. Traditional treatment technologies have proven ineffective against PFAS removal, necessitating the exploration of advanced oxidation processes. PFOA, classified as a "forever chemical" due to its persistent nature, poses a unique challenge for degradation. Previous attempts using microbial species demonstrated limited success, highlighting the need for alternative methods. The research focuses on the application of advanced oxidation processes, specifically UV irradiation under varying conditions, as a promising avenue for PFOA removal. The study employs a systematic approach to optimize the efficiency of UV-based oxidation, considering factors such as irradiation intensity, duration, and environmental conditions. Preliminary findings suggest the potential of this method to address the challenges posed by PFOA persistence and resistance to conventional treatment strategies. This research contributes to the growing body of knowledge on PFAS removal techniques and underscores the importance of developing sustainable solutions to combat emerging environmental contaminants. 
As the demand for effective treatment technologies rises, our findings aim to inform future strategies for mitigating the impact of PFAS contamination on drinking water quality.
Speaker
Speaker biography is not available.

Analyzing the Influence of Low-Frequency Induced Vibrations on the Tensile Strength of 3D Printed Materials

Ayati Vyas (San Jose State University, USA); Shreyas Ravada (Monta Vista High School, USA); Sohail Zaidi (San Jose State University, USA)

3D printing has evolved into a mature technology finding widespread industrial applications. The predominant method, fused deposition modeling (FDM), involves layer-by-layer deposition of melted thermoplastic to achieve the desired component shape. Common printing materials include polylactic acid (PLA), acrylonitrile butadiene styrene (ABS), polyethylene terephthalate glycol (PETG), and thermoplastic polyurethane (TPU). While these materials possess excellent thermal and mechanical properties for producing high-quality specimens, there is still room for improvement in both efficiency and overall strength. Experiments indicate that minimizing layer thickness and raster width enhances the tensile strength of printed material. Additionally, 3D printing is susceptible to external vibrations, which can lead to failed prints with undesirable wavy patterns known as "ringing". In contrast, a 2018 study demonstrated a three-order-of-magnitude increase in material flow rate when high-amplitude ultrasonic vibrations were applied to the ejecting nozzle. The objective of the current study is to validate the concept that deliberately induced vibrations during 3D printing will impact tensile strength. The proposed hypothesis is that low-frequency induced vibrations will decrease porosity, consequently increasing the overall tensile strength of the material. To conduct the research, a Tronxy X5SA 3D printer was utilized, with its printing stage modified to incorporate a vibrating mechanism. An Ocity Vibration Rumble Motor (B07FL7HQ7Y) was mounted on the stage holding the ejecting nozzle and operated between 2000-3000 rpm at 3-6 V. Vibrating frequencies were measured between 3 and 6 Hz. Dog bone specimens conforming to the ASTM Type I standard were printed from PLA and ABS plastic with infill levels of 100%, 75%, 50%, and 30%. Specimens were printed with and without vibrations, using variations in infill level and vibration as the main parameters to evaluate the impact on tensile strength. 
To accurately determine the porosity of each specimen, the Archimedes approach was adopted: specimens were submerged in water, and the displaced liquid volume was measured along with the weights of the dry and wet specimens. Preliminary experimental results support our hypothesis. It was found that, for both PLA and ABS materials, increased vibration frequency (from 3 to 5 Hz) reduced porosity by 3-4% for all infill levels except the 100% infill case. Dog bone specimens were tested for tensile strength. Information on each specimen, including area, maximum load at rupture, and strain percentage at the break point, was collected. Results indicate that for a 100% infill specimen with a 3 Hz induced frequency, a 17.2% increase in maximum stress was observed. For a 60% infill specimen, the corresponding increase was about 15.7%. Further analysis is in progress, and the final presentation will include in-depth results for this investigation.
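The Archimedes porosity determination described above reduces to a short calculation: the displaced water gives the bulk volume, and the dry mass divided by the material density gives the solid volume. The function and specimen values below are illustrative assumptions, not the study's measured data (a PLA density of roughly 1.24 g/cm^3 is a commonly cited figure):

```python
def porosity(dry_mass_g, displaced_volume_cm3, material_density_g_cm3):
    """Archimedes-style estimate: porosity = 1 - (solid volume) / (bulk volume)."""
    solid_volume = dry_mass_g / material_density_g_cm3   # volume the plastic alone occupies
    return 1.0 - solid_volume / displaced_volume_cm3     # remainder is void fraction

# Hypothetical PLA specimen: 10 g dry mass displacing 8.5 cm^3 of water
p = porosity(10.0, 8.5, 1.24)    # roughly 5% porosity for these made-up numbers
```

A lower displaced volume for the same dry mass would indicate a denser, less porous print, which is the quantity the vibration experiments aim to reduce.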
Speaker
Speaker biography is not available.

Unlocking the Potential: Cloud-Based IBM Platform's role in Advancing Machine Learning Models for Early Heart Disease Detection

Advika Arya (American High School, USA); Sohail Zaidi (San Jose State University, USA)

According to the World Health Organization, cardiovascular issues stand as the leading cause of death. In recent years, an increasing number of individuals have been affected by heart problems, leading to a surge in heart disease. The conventional method for diagnosing heart disease involves coronary angiography, a precise yet invasive surgical procedure. Our hypothesis suggests that the integration of AI/machine learning techniques can enhance heart disease predictions, improving healthcare by detecting an individual's risk without resorting to surgery. This study utilized data from the UC Irvine repository incorporating 13 features: age, sex, chest pain type, resting blood pressure, serum cholesterol, fasting blood sugar, resting electrocardiographic results, maximum heart rate, oldpeak, exercise-induced angina, slope of the peak exercise segment, number of major vessels, and thal. The analysis involved 303 patients from several hospitals, leveraging the IBM platform to deploy multiple algorithms and develop predictive models. Snap logistic regression, extra trees classifier, and logistic regression with and without enhancements were employed. Achieving high accuracy levels, all above 80 percent, each algorithm highlighted the percentage contribution of significant features to model predictions. For instance, chest pain (100%), thal (97%), exercise-induced angina (92%), number of major vessels (87%), oldpeak (67%), maximum heart rate (62%), and age (47%) were identified as pivotal in the extra trees classifier model. Strong correlations among various features in predicting heart disease with high accuracy were observed across all algorithms. The study also explores variations in results obtained by changing the number of folds in the data, presenting ROC curves, F1 scores, and confusion matrices for comparative analysis. A comprehensive discussion of the machine learning results, including strategies for improving accuracy, will be presented. 
The methodology employed on the IBM Watson platform will be detailed, emphasizing the advantages of utilizing cloud-based platforms for developing AI/ML based predictive models.
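The per-feature percentage contributions quoted above (chest pain 100%, thal 97%, and so on) are typically raw model importances rescaled so the top feature reads 100%. A minimal sketch of that rescaling is shown below; the raw importance values are hypothetical stand-ins, not the study's actual model output:

```python
# Hypothetical raw feature importances (e.g., from a tree-ensemble model).
raw_importance = {
    "chest pain": 0.31, "thal": 0.30, "exercise induced angina": 0.285,
    "number of major vessels": 0.27, "oldpeak": 0.208,
    "max heart rate": 0.192, "age": 0.1457,
}

# Rescale so the most important feature reports 100%.
top = max(raw_importance.values())
relative = {k: round(100 * v / top) for k, v in raw_importance.items()}
```

Reporting importances relative to the top feature makes models with different absolute importance scales directly comparable.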
Speaker
Speaker biography is not available.

Evaluation of Inter-Process Communications in System-on-Chip Computers by FAST-DDS

Connor Wu (Marriotts Ridge High School & Johns Hopkins University Applied Physics Laboratory, USA)

System-on-chip (SOC) computers enable seamless interprocess communication (IPC), facilitating the Internet of Things (IoT) by exchanging data across devices such as smartphones, security systems, automotive systems, and digital cameras. This technology streamlines connections between applications, allowing efficient data exchange. Despite its advantages, occasional latency spikes within these systems can delay data reception. Consequently, evaluating IPC on SOC computers becomes crucial to understanding the correlation between the chosen transport mechanism and latency values. In this poster, I present my findings on how the transport layer used affects latency. The latency values were collected with a C++ application using Fast-DDS as the networking library, and a Python script using matplotlib generates a latency-versus-transport graph. The application first starts the subscriber, which reads a configuration file to determine the transport to use, the starting frequency, the frequency increment, the ending frequency, and the number of samples to collect at each frequency. The publisher, after initializing, reads the same configuration file and sends the specified number of samples. After the data is collected, the Python script generates a graph comparing the transport layers and their latency values. At the current stage of this project, publishers and subscribers can exchange data with one another. In the future, we plan to expand the application to receive information from various sensors and to embark on further investigations into Quality of Service exploration. The Fast-DDS library exposes many Quality of Service parameters that can be tuned to strengthen or relax reliability guarantees, giving users more control over communication characteristics. 
We hope users will find it easy to extend this project and use it for real-time analytics.
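The configuration-driven sweep described above can be sketched in pure Python. The actual application uses Fast-DDS in C++; the configuration keys and the stand-in latency numbers below are illustrative assumptions, meant only to show how the sweep expands and how per-frequency statistics feed the latency-versus-transport graph:

```python
import statistics

# Hypothetical configuration mirroring the fields described in the abstract.
config = {
    "transport": "shared_memory",   # transport under test
    "start_hz": 10,                 # frequency to start at
    "step_hz": 10,                  # amount to increment frequency
    "end_hz": 30,                   # frequency to end at
    "samples_per_freq": 3,          # samples to collect per frequency
}

def frequencies(cfg):
    """Expand the sweep: start_hz, start_hz + step_hz, ..., up to end_hz."""
    return list(range(cfg["start_hz"], cfg["end_hz"] + 1, cfg["step_hz"]))

def summarize(latencies_us):
    """Per-frequency summary used when plotting latency against transport."""
    return {"mean": statistics.mean(latencies_us), "max": max(latencies_us)}

freqs = frequencies(config)
# Stand-in latency samples in microseconds (a real run would measure these).
fake_samples = {f: [100 + f, 105 + f, 98 + f] for f in freqs}
report = {f: summarize(s) for f, s in fake_samples.items()}
```

Running one such sweep per transport yields the per-transport latency summaries that the matplotlib script turns into the comparison graph.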
Speaker
Speaker biography is not available.

Development of a Heatsink with embedded thermosyphons for Passive Cooling of High-Power LED Panels

Ayush Guha (Dublin High School, USA); Ayaan M Raza (Bellarmine College Preparatory, USA); Sohail Zaidi (San Jose State University, USA)

High-energy LED panels find diverse applications, ranging from indoor to space agriculture. While LED panels are generally efficient, they tend to produce significant heat, impacting their effectiveness and posing a risk of permanent damage. Active cooling methods, such as fans, not only consume excessive energy but are also prone to failure, potentially reducing the panel's overall lifespan. This research explores a passive cooling approach that integrates a traditional heatsink with embedded thermosyphons operating at low pressure. The thermosyphons evaporate a working fluid, which then condenses at the condenser end, releasing heat to the environment; the condensed liquid returns to the evaporator section under gravity. In this study, an effort is made to combine these two passive techniques by designing a heatsink with embedded thermosyphons. To incorporate a thermosyphon within each 10 mm × 10 mm fin, special design arrangements were implemented. A total of 144 rectangular pin fins, each with a 3 mm embedded hole, were attached to a vapor chamber filled with R134a refrigerant at low pressure. At elevated temperatures, the fluid activates the thermosyphon process, effectively transferring heat away from the LED panel. The lower vapor chamber is sealed, and the LED panel is affixed beneath it. To enhance heat conduction and minimize air pockets between surfaces, thermal paste is applied. Temperature data is collected using 16 K-type thermocouples attached to the tips and bases of 8 different pins around the heatsink. The LED panel is turned on, and the temperature readings are recorded through a multiplexer PCB connected to a Raspberry Pi. Initially, the temperature of the LED panel surface was recorded with the cooling fans, which were later removed to establish baseline temperature data. Experimental results reveal that without cooling fans, the LED panel's surface temperature reached 120 °C, while with the cooling fans, it reduced to approximately 40 °C. 
The LED panel was then attached to the bottom surface of the heatsink, and temperatures were recorded along the thermosyphon-embedded fins. The data shows a percentage change along the fins ranging from 6.3% to 12.8%, depending on the fin's location along the periphery of the heatsink. Theoretical temperatures along solid fins were modeled using MATLAB; the experimentally measured difference between the top and bottom of the thermosyphon fins was over 15 times lower than the theoretical values for solid pins. This, coupled with the temperature variation along the fins, suggests that the thermosyphon process within the vapor chamber was activated at higher temperatures. Efficient cooling is achieved by transferring heat from the base of the LED panel to the condenser section of the thermosyphon. The LED surface temperature with the thermosyphons in operation measured around 42 °C, closely aligning with the target temperature achieved with the cooling fans. These experiments were repeated for accuracy, and the comprehensive results will be presented at the upcoming conference.
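The theoretical solid-fin baseline mentioned above is presumably the classical one-dimensional pin-fin model; a sketch of the adiabatic-tip form is shown below. The geometry and heat transfer coefficients are illustrative assumptions, not the study's MATLAB inputs:

```python
import math

def fin_tip_temperature(T_base, T_amb, h, k, L, side):
    """Square pin fin with adiabatic tip: theta(x)/theta_b = cosh(m(L-x))/cosh(mL),
    with m = sqrt(h*P / (k*A)). Returns the temperature at the tip (x = L)."""
    P = 4 * side              # cross-section perimeter [m]
    A = side * side           # cross-sectional area [m^2]
    m = math.sqrt(h * P / (k * A))
    theta_b = T_base - T_amb  # base excess temperature
    return T_amb + theta_b / math.cosh(m * L)

# Hypothetical aluminum fin: 10 mm x 10 mm cross-section, 50 mm long,
# natural convection h = 15 W/m^2K, k = 200 W/mK, 120 C base, 25 C ambient.
tip = fin_tip_temperature(T_base=120.0, T_amb=25.0, h=15.0, k=200.0, L=0.05, side=0.01)
```

For a solid conductive fin this model predicts only a few degrees of tip-to-base drop; a much smaller measured drop in the thermosyphon fins is consistent with the near-isothermal behavior expected once the two-phase process activates.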
Speaker
Speaker biography is not available.

STEM Approach to enhance Robot-Human interaction through AI Large Language Models and Reinforcement Learning

Siddhartha Shibi (Washington High School & Intelliscience Training Institute, USA); Sohail Zaidi (San Jose State University, USA)

Humanoid robots, with their seemingly limitless capabilities, have revolutionized the world. Their applications range from household assistance to advertising. As these technologies age, however, their sensors, motors, and cameras become outdated, making previous humanoids a thing of the past. This project takes a STEM approach to enhancing these robots by tackling the most crucial issue such humanoids face: their adequacy in human-robot interactions. This study explores the promise of integrating Large Language Models (LLMs), such as Google PaLM 2 and ChatGPT, to supplement the capabilities of such robots, as well as bringing Chain-of-Thought (CoT) reasoning to their responses. The subject of this project is the humanoid robot Pepper, by SoftBank Robotics, a popular robot designed to interact with humans; however, due to its weak natural language processing (NLP) capabilities, it struggles to adequately articulate responses in human-robot conversation. For instance, the robot could easily list responses to simple questions such as "What is your name?" or "What are you?", yet struggled to provide adequate responses to queries such as "Who is the president of the United States?" or "When is the next World Cup?". Through AI/ML LLM integration, such questions were handled by much-improved LLMs in place of the robot's previous built-in responses. This demonstration is shown in the video uploaded at: https://youtu.be/hF7aRlQmnqs?feature=shared. Our approach targeted the robot's main weak points: its ability to provide responses to asked questions and to remember prior questions and conversation. By intercepting the robot's own NLP dialog module, the asked prompt can be routed through a chat adapter, which records conversations in a chat database for context and forwards them to the LLM of choice. 
This approach, implemented by using Android Studio to create an appropriate application, addresses contextual reasoning by pulling from the chat database and provides adequate responses limited only by the AI/ML model of choice. The project involved integrating ChatGPT/PaLM 2 into Pepper's existing system to enable the generation of more natural and engaging responses. Beyond this integration, further work is in progress: the aim is to develop a way for the robot to simultaneously extract other situational data from conversation, such as facial and tonal expressions, bringing in human feedback so that responses can be further fine-tuned. Aside from this work-in-progress integration of RLHF (Reinforcement Learning with Human Feedback), the effectiveness of the approach was evaluated through a user study comparing the robot with and without LLM integration. The results indicated that integrating LLMs into the robot's NLP system significantly improved its ability to generate coherent responses, leading to more natural human-robot interactions. Overall, this presentation will demonstrate the potential of using LLMs to enhance the NLP capability of humanoid robots like Pepper. We believe the proposed approach can pave the way for developing more intelligent human-robot interactions in the future.
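The interception flow described above can be sketched as follows. All names here (ChatAdapter, echo_llm) are hypothetical illustrations; the real system targets Pepper's dialog module and calls ChatGPT/PaLM 2 from an Android application:

```python
class ChatAdapter:
    """Stand-in for the chat adapter: stores conversation context and queries an LLM."""

    def __init__(self, llm):
        self.llm = llm        # callable: (history, prompt) -> reply
        self.history = []     # chat-database stand-in holding (prompt, reply) pairs

    def handle(self, utterance):
        # The intercepted dialog prompt is answered with full conversational context.
        reply = self.llm(self.history, utterance)
        self.history.append((utterance, reply))
        return reply

def echo_llm(history, prompt):
    # Deterministic stand-in for a real LLM call; numbers replies by turn.
    return f"reply#{len(history) + 1} to: {prompt}"

adapter = ChatAdapter(echo_llm)
first = adapter.handle("Who is the president of the United States?")
second = adapter.handle("When is the next World Cup?")
```

Because every exchange lands in the history, a real LLM backend can be given the prior turns as context, which is what lets the robot remember earlier questions.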
Speaker
Speaker biography is not available.

Integrating Machine Learning Techniques to Improve Pneumonia Diagnostics by Analyzing Chest X-ray Scans

Manasvi Pinnaka (IntelliScience Institute, USA); Sohail Zaidi (San Jose State University, USA)

Pneumonia is a respiratory infection that causes over a million hospitalizations and 50,000 deaths every year, making it the fourth most common cause of mortality overall. Pneumonia diagnostics are complicated, as physicians must rely first on chest X-rays, followed by other clinical tests, including those based on blood and sputum samples, to confirm pneumonia. The recent COVID-19 pandemic has only increased the number of cases of this disease, with the virus attacking the airways and gas exchange regions of the lungs, leading to these prominent respiratory infections. Large amounts of data are now available that can aid diagnostic capabilities for this disease. Since this enormous quantity of data can only be efficiently evaluated with the use of computers and statistical techniques, automation of the diagnostic process for pneumonia is extremely beneficial. Artificial intelligence has provided the ability to transition from traditional diagnostic tools to a more machine-driven version that can significantly improve pneumonia diagnosis in terms of cost, time, and accuracy. Different radiologists can interpret chest X-rays in different ways, which makes this diagnostic method highly subjective. This subjectivity carries over when advanced machine learning techniques are employed to develop predictive models based on chest X-ray examinations. The objective of the current work is to explore the impact of subjectivity on the accuracies of these machine learning models. The chest X-ray images were obtained from the RSNA International COVID-19 Open Radiology Database (RICORD). This database consists of approximately 1,000 chest X-rays from 361 patients at least 18 years of age who tested positive for COVID-19. Each X-ray image was evaluated by three radiologists based on appearance (typical, indeterminate, atypical, or negative for pneumonia) and airspace disease grading (mild, moderate, or severe). 
In the current work, a convolutional neural network (CNN) algorithm was employed on four different variations of the dataset described above: the diagnoses of radiologist #1, radiologist #2, and radiologist #3, as well as a triplicated set in which each of the three diagnoses of a single chest X-ray scan appears as a separate entry. The same CNN model achieved training accuracies of 43.71%, 20.54%, 20.56%, and 27.83% and testing accuracies of 44.39%, 18.93%, 20.00%, and 27.93%, respectively. As expected, the impact of subjectivity can be identified in the low model accuracies. Poor to moderate model performance across all four classification tasks indicates the problem that non-objective evaluations of chest X-rays, specifically variations in the diagnostic analysis of the same scans, pose for medical decisions. Machine learning has to be integrated with doctors' and radiologists' opinions, which vary based on their expertise and experience, to achieve the optimal balance between accuracy and efficiency in health-based assessments of COVID-19 pneumonia. The full set of results and their interpretation will be included in the final presentation.
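The inter-reader subjectivity discussed above can be quantified with a simple pairwise-agreement sketch: the fraction of scans on which two radiologists assign the same appearance label. The labels below are hypothetical examples, not RICORD annotations:

```python
from itertools import combinations

# Hypothetical appearance labels from three readers over five scans.
labels = {
    "r1": ["typical", "typical", "atypical", "negative", "indeterminate"],
    "r2": ["typical", "indeterminate", "atypical", "negative", "typical"],
    "r3": ["typical", "typical", "indeterminate", "negative", "typical"],
}

def pairwise_agreement(a, b):
    """Fraction of scans on which two readers assign the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

agreements = {
    (i, j): pairwise_agreement(labels[i], labels[j])
    for i, j in combinations(labels, 2)
}
mean_agreement = sum(agreements.values()) / len(agreements)
```

Low mean agreement on the training labels puts a ceiling on what any model trained against a single reader's diagnoses can achieve, which is one plausible reading of the low accuracies reported above.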
Speaker
Speaker biography is not available.

Adaptability of IBM Watson Cloud Platform to Develop Machine Learning Models for Predicting Students' Academic Stress

Syed M Kazmi (Rutgers University, USA); Alisha Kazmi (Notre Dame San Jose, USA); Anvikh Arava (John Champe, USA)

In recent years, machine learning (ML) has undergone a significant transformation, largely driven by the challenges inherent in traditional model development methods. These approaches, often dependent on expert knowledge of programming languages, algorithms, and statistical techniques, are time-consuming and demand a high level of skill to effectively manipulate parametric variations and their impact on model accuracies. This study offers a comprehensive analysis of the adaptability of the IBM Watson Cloud Platform for developing ML models, addressing many of these challenges. Machine learning, a prime example of the STEM approach, involves training algorithms to learn and make predictions or decisions from data. Traditionally complex and skill-intensive, this process is simplified through AI platforms like IBM Watson. Our research explores the functionality of the IBM platform, emphasizing its flexibility in providing various split-ratio variations, algorithm choices, and K-fold variations, and how these features influence model performance. To assess the platform's efficacy, we conducted a case study analyzing academic stress among students. Data was collected from two primary sources. The first set of data was obtained from a university in Pakistan immediately after the COVID peak by distributing a questionnaire among students. The aim was to gather information on various relevant parameters grouped into four sections: "General Information", "Perceived Stress Scale", "Cognitive Assessment", and "Social Dependency". The Watson ML platform was used to develop a model under the "supervised learning" option, incorporating various algorithms including the Extra Trees Classifier and Random Forest Classifier. The platform proposed the two best-performing algorithms, including a Random Forest Classifier that achieved an accuracy of 66.4% with feature enhancements such as hyperparameter optimization and feature engineering. 
Results indicate that among all impacting parameters, cognitive performance, self-study hours, and the number of class absences played a dominant role in predicting a student's average score. The impact of parametric variations such as split ratios and K-fold distributions was also examined, showing that model accuracies could be optimized by carefully selecting the split ratio together with an associated value of k. This research is being expanded to analyze more data on students' academic performance. The new dataset under investigation, obtained from Kaggle, comprises passive and automatic sensing data from the phones of a class of 48 Dartmouth students over a 10-week term, used to assess their mental health (depression, loneliness, stress), academic performance (term GPA and cumulative GPA), and behavioral trends (sleep, visits to the gym). This data is currently being analyzed, with new models indicating high accuracy, and results are being compared with published papers on the same data. The final results will be presented at the upcoming conference, where it will be argued that the IBM Watson Cloud Platform is a robust tool that simplifies machine learning model development, making it more accessible and less reliant on deep technical expertise.
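The K-fold variation discussed above partitions the data into k subsets so each serves once as a validation fold. A minimal pure-Python sketch of that partitioning is shown below; the study itself used Watson's built-in options rather than hand-rolled splits:

```python
def k_fold_indices(n, k):
    """Partition n sample indices into k near-equal contiguous folds."""
    base, extra = divmod(n, k)      # first `extra` folds get one extra sample
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(10, 3)   # e.g., 10 samples split into 3 folds
```

Each fold in turn becomes the validation set while the rest train the model; varying k (and the train/test split ratio) is exactly the parametric sweep whose effect on accuracy the study examines.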
Speaker
Speaker biography is not available.

Robot Motion Planning with Complementarity Constraints: When is it easy?

Ishita Banerjee (USA); Nilanjan Chakraborty (Stony Brook University, USA)

This research is on robot motion planning, where the goal is to find a path for a robot from a start to a goal configuration without hitting obstacles in the environment. An instance of a robot motion planning problem consists of a geometric model of an environment with obstacles, a model of a robot, and its initial and goal configurations. Computationally, robot motion planning is known to be NP-hard (more accurately, PSPACE-hard), which means that there are instances of the motion planning problem where it is computationally very expensive to compute a feasible or collision-free path, even if one exists. Practically, this means that there are motion planning problems that are unsolvable in a reasonable time. The purpose of my research project is to understand a related question: Can we characterize the set of motion planning instances where the motion planning problem is solvable in polynomial time? Understanding this question will help us devise more reliable robotic systems and help us understand the performance of robotic systems in certain deployed scenarios, such as a home environment. It may also allow the robot to reason about its environment and understand how some of the obstacles may be rearranged, if possible, to obtain a feasible motion plan. The question is quite challenging since it is also tied to the underlying motion planning algorithm being used. Within the context of this overarching problem, my goal is to understand the above question for point holonomic robots moving in a 2D or 3D environment. Up to now, I have considered non-overlapping circular obstacles. We can prove that in this environment all motion planning problems are easy, i.e., it is possible to solve the motion planning problem in polynomial time. The computational model of this problem was created using a discrete-time kinematic motion model of the robot and a position-level complementarity constraint. 
The collision model was created at the kinematic level using a complementarity constraint. For collision avoidance, a corrective velocity is applied to the robot to bring the normal component of the robot's velocity to zero, based on the complementarity constraint. The environment creation and the simulation of the robot's movement using this mathematical model were implemented in Python, where it was demonstrated that the model works for any complex environment with non-overlapping circular obstacles. After validating our theory with circular obstacles in the 2D environment, the same implementation was extended to a 3D environment with spherical obstacles. In future work, we plan to study the problem of characterizing computationally efficient motion planning instances using polygonal obstacles.
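A minimal 2D sketch of the complementarity-style collision rule described above: when the gap to a circular obstacle closes and the commanded velocity points inward, the inward normal component is removed. This is an illustrative simplification of the discrete-time kinematic model, not the authors' implementation:

```python
import math

def step(pos, vel, obstacles, dt=0.1, eps=1e-6):
    """One discrete-time kinematic step for a point robot among circular obstacles.
    Complementarity idea: zero gap and inward normal velocity cannot coexist."""
    for (cx, cy, r) in obstacles:
        dx, dy = pos[0] - cx, pos[1] - cy
        dist = math.hypot(dx, dy)
        gap = dist - r                      # distance from robot to obstacle surface
        nx, ny = dx / dist, dy / dist       # outward unit normal at the contact
        v_n = vel[0] * nx + vel[1] * ny     # normal component of the velocity
        if gap <= eps and v_n < 0:
            # In contact and moving inward: project out the normal component.
            vel = (vel[0] - v_n * nx, vel[1] - v_n * ny)
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt), vel

# Robot touching a unit circle at the origin, commanded straight at its center:
pos, vel = step(pos=(1.0, 0.0), vel=(-1.0, 0.0), obstacles=[(0.0, 0.0, 1.0)])
```

With the inward component removed, the robot can only slide tangentially along the obstacle boundary, which is the behavior the position-level complementarity constraint enforces; the same projection generalizes directly to spheres in 3D.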
Speaker
Speaker biography is not available.

Photoredox-Catalyzed SH2 Cross-Coupling of Alkyl Chlorides Via Silyl-Radical Mediated Chlorine Atom Abstraction

Ashlena M Brown (Princeton University Laboratory Learning Program); Andria L Pace (Princeton University, USA); David W.C. MacMillan (Principal Investigator, USA)

C(sp3)–Cl bond activation has incredible potential for the formation of C(sp3)–C(sp3)-rich compounds, which are highly desirable in the pharmaceutical field. However, cross-coupling of alkyl chlorides to produce C(sp3)–C(sp3) bonds has not yet been achieved due to the inherent limitations of the C(sp3)–Cl bond. Despite this, alkyl chloride starting materials are commercially abundant and accessible. Thus, the ability to generate radicals from alkyl chlorides that form quaternary products could significantly impact organic reactions and drug synthesis. In this work, a bimolecular homolytic substitution (SH2) reaction between primary and tertiary alkyl chlorides is proposed, key bond formations are shown, and yields are listed. BTMG, Fe(OEP)Cl, [Ir(F(Me)ppy)2dtbbpy]PF6, and (TMS)3SiNHAdm were used alongside various primary and tertiary chlorides in a photoreactor under blue light. Data was analyzed using UPLC, NMR, and liquid chromatography. The highest yield of the desired cross-coupled product was 67%, obtained with benzyl chloride as the limiting reagent. The reaction was also achieved with other primary chlorides, and the reaction scope and optimization hold significant potential for further research.
Speaker
Speaker biography is not available.

Plasma-Water Interaction: Measuring RONS to Investigate the Plasma-Wound Interaction Process

Sharon Mathew (Archbishop Mitty High School & San Jose State University, USA); Sonya Sar (BASIS Independent Silicon Valley, USA); Sohail Zaidi (San Jose State University, USA)

In this study, the plasma-water interaction phenomenon was investigated. Non-equilibrium plasma is a state of plasma in which the electrons are much hotter than the heavier ions and neutral atoms. Despite the high energy of the electrons, the overall temperature of the plasma remains relatively low, near room temperature. This unique characteristic enables the use of non-equilibrium plasma in sensitive medical applications, such as wound healing and sterilization, benefiting millions of patients. However, the interaction of plasma with wounds is complex, involving chemical reactions between plasma radicals and the water present in the wound, and necessitates further understanding. When a plasma jet, entering atmospheric air, interacts with water in a wound, it generates Reactive Oxygen and Nitrogen Species (RONS), crucial for wound healing. To optimize this process, it is important to investigate how different RONS vary under different plasma exposure conditions. This study aims to measure the concentration of RONS generated by plasma in water. Experiments were conducted on plasma-water interaction, analyzing water samples with and without plasma exposure using a spectrophotometer (Shimadzu, 1900 Series). For this purpose, a special experimental rig was designed and an experimental setup was created. A Dielectric Barrier Discharge (DBD) plasma torch, operating at 10-12 kV/30-40 kHz with helium at 10 SLPM, was employed to generate a plasma jet measuring about 20-30 mm in length. The input power, measured with two 1000:1 voltage probes, ranged from 10 mW to 20 mW, depending on the operating conditions. Special arrangements allowed controlled exposure of DI water to the incident plasma, and the plasma exposure time for all samples was precisely regulated. Initial experiments revealed that a 30-minute exposure reduced the water's pH value by 54%, indicating increased acidity and the formation of RONS in the plasma-activated water (PAW). 
Additionally, a notable 220% increase in the absorption peak was observed as the exposure duration increased from 5 to 10 minutes, suggesting higher concentrations of RONS. The study is progressing to explore how varying plasma exposure times affect the absorption curves obtained in spectroscopy. To quantify the concentrations of various molecular species, calibration curves are being established using standard sets of samples for individual species, including NO3- and NO2-. Preliminary results have been obtained and are undergoing reconfirmation and analysis. Further findings will be presented at the upcoming conference.
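The calibration-curve step described above amounts to a linear (Beer-Lambert style) fit of absorbance against known standard concentrations, then inverting the fit for an unknown sample. The standards below are made-up numbers, not the study's NO3-/NO2- data:

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical standards: known concentrations (mg/L) and measured absorbances.
conc = [0.0, 1.0, 2.0, 4.0]
absorb = [0.01, 0.21, 0.41, 0.81]
slope, intercept = linear_fit(conc, absorb)

def concentration(absorbance):
    """Invert the calibration line to recover an unknown sample's concentration."""
    return (absorbance - intercept) / slope

unknown = concentration(0.51)   # concentration implied by an absorbance of 0.51
```

Once such a line is established per species, each plasma-activated water sample's absorbance maps directly to a RONS concentration.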
Speaker
Speaker biography is not available.

Analyzing DBD Plasma under Varied Operating Conditions: Implications in Accelerated Wound Healing

Srida Aliminati and Aryan Tummala (BASIS Independent Silicon Valley, USA); Sohail Zaidi (San Jose State University, USA)

The wound healing process is hindered by deprivation of oxygen at the wound site. A few non-intrusive therapeutic techniques are available, including hyperbaric oxygen therapy (HBOT) and topical oxygen therapy (TOT); in both cases, patients are exposed to oxygen to elevate the oxygen level at the wound site. In recent years, dielectric barrier discharge (DBD) plasma techniques have emerged as an effective non-intrusive therapy for accelerated wound healing. Recent studies show that plasma contains reactive oxygen and nitrogen species that may help the wound healing process by means of microcirculation and oxygenated hemoglobin. While underscoring the pivotal role of oxygen and its associated radicals in accelerating all phases of wound healing, several limitations have become apparent. It has been demonstrated that an optimal amount of oxygen is crucial for an efficient healing process, as both hypoxia and hyperoxia impede the healing trajectory. To maintain this delicate balance, controlled manipulation of oxygen radicals is essential, necessitating additional studies to provide a quantitative understanding. In this work, we investigate how small additions of oxygen can impact the species in the plasma. Monitoring these species will help us optimize the required oxygen concentrations in the plasma exposing the wound surface. This is achieved by examining the emission spectrum of the plasma and observing the relative changes in various plasma emission lines at various plasma operating conditions and at various amounts of oxygen added to the main plasma flow. An Ocean Optics (HR4000CG-UV-NIR) spectrometer was used to capture the emission spectrum. When oxygen gas was introduced into the helium plasma at various concentrations and voltages, distinct variations in the emission spectrum became apparent. In the absence of oxygen, prominent atomic helium lines at 706 nm, 655 nm, 667 nm, and 727 nm were observed. 
Additionally, a few nitrogen lines were observed, potentially originating from atmospheric air entrained into the plasma jet. The addition of oxygen introduced two prominent oxygen lines (776 nm and 844 nm) into the spectrum, leading to a notable decrease in the atomic helium lines. The addition of nitrogen, on the other hand, led to the appearance of prominent nitrogen lines, predominantly in the second positive nitrogen system. This study examines changes in the helium emission spectrum at various flow rates of added nitrogen and oxygen. In each case, several plasma input voltages ranging from 7 kV to 13 kV (40-50 kHz) were employed to assess their impact on plasma characteristics. To investigate the influence of added oxygen on bacteria (E. coli), bacterial colonies were exposed to plasma both with and without oxygen, and the colonies were subsequently counted in each case. A notable reduction in bacterial colonies was observed when oxygen was included in the helium plasma. The poster will provide comprehensive details regarding the experimental hardware and software utilized in this study, and will summarize the experimental results related to bacteria.
Speaker
Speaker biography is not available.

Setting up an Economical Testing Facility for Genome Sequencing of Chrysaora plocamia and Human Saliva

Deshna Shekar (Evergreen Valley High School, USA); Indeever Madireddy (USA); Prasun Datta (Tulane University, USA); Sohail Zaidi (San Jose State University, USA)

Genome sequencing has become an important way to investigate an organism's biology. Analyzing organisms' genomes provides key insight into genetic information and variation between organisms, as well as the heritability of mental and physical illnesses in animals and humans. Over the last decade, genome sequencing has become significantly more practical to perform, especially with the development of third-generation sequencing technology and new techniques in gene analysis. The advent of Nanopore technology, with long-read sequencing and real-time analysis of data, has made sequencing more cost-efficient and feasible. Intelliscience Institute, in collaboration with San Jose State University, has set up a fully furnished and economical laboratory capable of sequencing genomes. Recently, we successfully sequenced the genome of Chrysaora plocamia, the South American sea nettle jellyfish. The objective of this work was to sequence a novel marine organism and establish an affordable research laboratory capable of exploring genomics. Jellyfish are essential in marine ecosystems, and the study of their genomes can reveal new medicinal, evolutionary, and ecological information. Using Nanopore technology and equipment such as a MinION Mk1B sequencer, thermal cycler, and spectrophotometer, we assembled a high-quality and highly contiguous genome for Chrysaora plocamia. A total of 2.9 million reads totaling 7.3 Gb of sequencing data were collected from a single R10.4.1 flow cell, providing 34x coverage of the jellyfish's haploid genome. Additionally, annotation of the genome using online databases of known venom genes helped us identify 112 putative venom genes with diverse toxin functions, which could have potential medicinal use in the future. This research is still in progress, and recent results are being analyzed. In our current project, we are investigating human saliva.
Human saliva contains proteins and enzymes other than water, which are essential for the maintenance of oral hygiene. In addition, saliva also contains diverse microbial species that maintain gum and oral health. Poor oral hygiene can lead to changes in oral microbiome, leading to the growth of bad bacteria that can promote oral cavities and plaque deposition. Poor oral health is directly associated with an increased risk of systemic disease, such as diabetes and obesity. Recent studies revealed that saliva is highly enriched with human DNA but non-human contaminating DNA can confound whole genome sequencing results. Current study is investigating this limitation and is also evaluating the saliva collecting methods that may improve the genome sequencing results. Further details of this research along with the important experimental steps involved in saliva genome sequencing will be included in the final presentation. Our poster will also include the details on the development of the genome lab and various protocols that were developed in our two projects described above.
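As a rough sketch of the coverage arithmetic behind the jellyfish figures quoted above (2.9 million reads, roughly 7.3 Gb, 34x coverage; the function names are hypothetical), sequencing depth is simply total sequenced bases divided by genome length:

```python
def fold_coverage(total_bases, genome_size):
    """Average sequencing depth: total sequenced bases / genome length."""
    return total_bases / genome_size

def mean_read_length(total_bases, n_reads):
    """Average read length across a sequencing run."""
    return total_bases / n_reads

# 7.3e9 bases at ~34x implies a haploid genome of roughly 7.3e9 / 34 ≈ 215 Mb,
# and 2.9 million reads gives a mean read length of about 2.5 kb.
```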
Speaker
Speaker biography is not available.

Using Artificial Intelligence (AI) and Machine Learning (ML) for Predicting Credit Card Approvals

Lori D Coombs (NASA & WWCM, USA); Layla M Coombs, Victoria G Coombs and Amanda J Coombs (Home Instruction, USA)

AUTHORS: Lori D. Coombs, Layla Coombs, Victoria Coombs, & Amanda Coombs. Our advisor is Associate Professor Lori D. Coombs, MBA, MSE, and our project is sponsored by a Director of WWCM Academy, Don B. Coombs, MBA. Our goal is to build a credit card approval prediction system to help lenders. The team follows steps to design, analyze data, build a predictive model, test, and deploy. From a cybersecurity perspective, the team will pay attention to data concerns in AI and ML with respect to training AI. This project aligns with NIST's framework to conduct research that advances trustworthy AI technologies and clarifies their capabilities and limitations. The results will help the team better understand the predictive analysis process and support future opportunities for similar projects. INTRO: Our team chose to research how AI and ML can be used to predict credit card approvals efficiently. Project start-up involves deciding which programming application to use and obtaining a large data set to analyze. By the end of the project, we aim to be able to synthesize and train open-source code, loan data, and computational output to render credit card approval predictions. BACKGROUND: Our goal is to develop secure code to support lenders with the credit card approval process. Our advisor is tasked with providing guidance on computer programming efforts and developing an effective research methodology. PROCESS: Our team will explore data, clean data, model, and perform analysis to support model deployment. RESULTS: Our team will use the results as a baseline for other predictive analysis projects. FUTURE WORK: To carry this project to the next level, we aim to complete the task of deploying a predictive model. Once deployment occurs, the team will understand where design improvements can be made.
Speaker
Speaker biography is not available.

Advancing Bacterial Mitigation on Hospital Floors: A STEM-Centric Exploration

Keerthana Dandamudi and Rachana Dandamudi (Lynbrook High School, USA); Sohail Zaidi (San Jose State University, USA)

Hospital floors are commonly laden with bacteria, acting as a major source for the spread and transmission of viruses and diseases. The prevalent use of chemical solutions for bacterial mitigation poses risks to both patients and the environment. Our project aims to address this issue through a STEM-based approach, integrating principles of physics, chemistry, technology, and engineering. We propose the use of plasma exposure to inhibit bacterial growth. To operationalize this technique, we designed and developed a special robot with specific parameters: a net weight of approximately 80 lbs, a maximum floor slope of 5 degrees, an operating speed of around 440 ft/min, a stopping accuracy of ~0.5 in, and a safety factor of 1.5. The robot design features a heavy small-size gas cylinder, a microprocessor, plasma torch stands, gas distribution and flow meters, and a power supply with ballast resistors for operating the plasma torches. We conducted torque calculations to ensure effective robot operation. For control purposes, the robot was equipped with multiple controllers: the TETRIX PRIZM robotics controller, the MAX DC motor expansion controller, a PS4 controller, and a TeleOp control module enabling remote operation. The robot, maneuverable via a joystick, is capable of moving forwards, backwards, and sideways, which is essential for scanning the floor while the plasma torches are active. The robot systematically carries the plasma torches across the floor, subjecting the bacteria to a potent plasma jet and effectively reducing bacterial presence. We utilized a Dielectric Barrier Discharge (DBD) plasma torch, innovatively mounted on the robot for autonomous scanning. In our experiment, the DBD plasma, generated by applying high voltages (~10 kV, 40-50 kHz) to gases such as helium or argon, was expelled as a jet or sheet, depending on the specific application. For experimental validation, standard hospital tiles were inoculated with E. coli bacterial colonies and cultivated for 24 hours. After exposure to the plasma, an online app was used to count the bacterial colonies, showing a marked reduction on the treated tiles compared to the control group. Upon contact with the plasma, the reactive nitrogen and oxygen species contributed crucially to the destruction of bacterial colonies by damaging the bacteria's proteins, lipids, and DNA. Our presentation will summarize our exploration into bacterial mitigation and detail how we implemented a STEM-driven solution, employing plasma technology and innovatively designed hardware, to combat this critical health care challenge.
Speaker
Speaker biography is not available.

Protocol Verification to Extract Flavonoid Content from Various Coffee Species

Ashna Zavery (Crystal Springs Uplands School, USA); Sumanth Mahalingam (Evergreen Valley High School, USA); Sohail Zaidi (San Jose State University, USA)

This work extends our ongoing research on flavonoids and their extraction from various coffee species. The extraction of flavonoids is important because flavonoids are useful in sequestering reactive oxygen species, as well as in therapies for cancer, Alzheimer's, and other diseases; they also exhibit neuroprotective and cardioprotective effects. A protocol was developed for this study to extract flavonoids in the first phase of the experiment. The current work, in the second phase of the experiment, is to revise, verify, and upgrade the extraction protocol. In this study, flavonoid content levels and antioxidant capacity were explored across three different coffee bean species: Coffea arabica, Coffea liberica, and Coffea canephora (Robusta). The filtered extracts of each coffee species were collected using hydroethanolic solvents and water-bath extraction to maximize the bioactive compound yield from each species. Thereafter, Total Flavonoid Content colorimetric assays were used to characterize flavonoid content for each species, while DPPH• (2,2-diphenyl-1-picrylhydrazyl) colorimetric assays were used to characterize antioxidant capacity. The differences between each species' flavonoid content and antioxidant capacity were analyzed using UV-visible spectroscopy. The absorbance values for the Total Flavonoid Content assay were compared against a calibration curve made from (+)-Catechin, while the DPPH values were compared against a control to find inhibition percentages. Analysis of the data revealed that Robusta coffee beans contained significantly higher levels of total flavonoid content, in mg of Catechin/mL, than the Arabica and Liberica beans. Moreover, the DPPH assay revealed that Robusta coffee maintained higher inhibition of the DPPH radical, indicating a higher antioxidant capacity. The protocol for the second phase of the experiment was the same as the protocol for the first phase.
However, the second-phase protocol called for larger volumes of the catechin standard solutions to reduce inaccuracies. Otherwise, the protocol verification was completed without any significant changes. The phase-2 results are in progress and will be presented at the upcoming conference.
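As an illustrative sketch of the two calculations named above (the function names and the calibration slope/intercept values are hypothetical), DPPH inhibition is computed against the control absorbance, and flavonoid content is read off a linear (+)-Catechin calibration curve:

```python
def dpph_inhibition(a_control, a_sample):
    """Percent inhibition of the DPPH radical from absorbance readings."""
    return (a_control - a_sample) / a_control * 100.0

def flavonoid_mg_per_ml(absorbance, slope, intercept):
    """Invert a linear calibration curve A = slope * C + intercept for C."""
    return (absorbance - intercept) / slope
```

For example, a control absorbance of 0.8 and a sample absorbance of 0.2 correspond to 75% inhibition.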
Speaker
Speaker biography is not available.

Preliminary Results from Integrating Chatbots and Low-Code AI in Computer Science Coursework

Yulia Kumar, Anjana Manikandan, Jenny Li and Patricia Morreale (Kean University, USA)

This study investigates the application of chatbots and low-code AI tools in advancing Computer Science (CS) education, with a focus on the CS AI Explorations course and the AI for ALL extracurricular program. It addresses two main research questions: first, the impact of chatbots on student growth and engagement in undergraduate research, and second, the potential of low-code AI platforms to bridge the gap between theoretical and practical AI skills. Conducted during the 2022-2024 academic years, this research presents a combination of case studies and empirical data to evaluate the effectiveness of integrating these technologies into conventional teaching methodologies. The preliminary findings indicate significant transformative potential for chatbots and low-code AI, offering valuable insights for future educational strategies and the creation of more dynamic, interactive learning environments. In particular, student involvement in research increased significantly. Future investigations will clarify the long-term effects of chatbot and low-code AI integration.
Speaker
Speaker biography is not available.

Evaluating Edge and Cloud Computing for Automation in Agriculture

Alberto Najera (University Heights High School, USA); Harkirat Singh (Francis Lewis High School, USA); Chandra Shekhar Pandey, Fatih Berkay Sarpkaya and Fraida Fund (NYU Tandon School of Engineering, USA); Shivendra Panwar (New York University & Tandon School of Engineering, USA)

Thanks to advancements in wireless networks, robotics, and artificial intelligence, future manufacturing and agriculture processes may be capable of producing more output at lower cost through automation. With ultra-fast 5G mmWave wireless networks, data can be transferred to and from servers within a few milliseconds for real-time control loops, while robotics and artificial intelligence can allow robots to work alongside humans in factory and agricultural environments. One important consideration for these applications is whether the "intelligence" that processes data from the environment and decides how to react should be located directly on the robotic device that interacts with the environment - a scenario called "edge computing" - or on more powerful centralized servers that communicate with the robotic device over a network - "cloud computing". For applications that require a fast response time, such as a robot moving and reacting to an agricultural environment in real time, there are two important tradeoffs to consider. On the one hand, the processor on the edge device is likely not as powerful as the cloud server and may take longer to generate a result. On the other hand, cloud computing requires both the input data and the response to traverse a network, which adds delay that may cancel out the faster processing time of the cloud server. Even with ultra-fast 5G mmWave wireless links, the frequent blockages characteristic of this band can still add delay. To explore this issue, we run a series of experiments on the Chameleon testbed emulating both the edge and cloud scenarios under various conditions, including different types of hardware acceleration at the edge and in the cloud, and different network configurations between the edge device and the cloud. These experiments will inform future use of these technologies and serve as a jumping-off point for further research.
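The tradeoff described above can be sketched in a few lines: the edge pays slower compute but no network delay, while the cloud pays round-trip network delay (plus any blockage penalty) on top of faster compute. All numbers below are illustrative, not measurements from the testbed:

```python
def edge_latency(compute_ms):
    """Edge: the response time is just the on-device compute time."""
    return compute_ms

def cloud_latency(compute_ms, network_rtt_ms, blockage_penalty_ms=0.0):
    """Cloud: faster compute, plus network round trip and any blockage delay."""
    return compute_ms + network_rtt_ms + blockage_penalty_ms

def faster_option(edge_compute_ms, cloud_compute_ms, rtt_ms, blockage_ms=0.0):
    """Pick the deployment with the lower end-to-end response time."""
    e = edge_latency(edge_compute_ms)
    c = cloud_latency(cloud_compute_ms, rtt_ms, blockage_ms)
    return "edge" if e <= c else "cloud"
```

With a 50 ms edge inference, a 10 ms cloud inference, and a 5 ms round trip, the cloud wins; add a 20 ms blockage penalty on a 30 ms round trip and the edge wins.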
Speaker
Speaker biography is not available.

Understanding Solar Weather

Lillian Wu, Isabella Vitale and Cecilia Merrill (Glen Ridge High School, USA); Corina S Drozdowski (Glen Ridge High School & Montclair State University, USA); Katherine Herbert (Montclair State University, USA); Thomas J Marlowe (Seton Hall University, USA)

Solar weather is a challenge impacting multiple areas of our lives: telecommunications and computing, climate, and human space activities. It can threaten much of our infrastructure, ranging from immediate effects on GPS systems, satellite communication, and aircraft communication to larger-scale disruptions. Better understanding and prediction of these complex phenomena can help limit these impacts. This poster reports on an independent-study investigation that extends a class study of solar weather, covering programming, sensor networks, data analysis, artificial intelligence, and machine learning. Our vision for this project is to gain a better understanding of solar weather data and to look for patterns in that data. Our goal has been to codify a structure for analyzing solar weather data and to create an application to do so. Two specific areas we are investigating involve using a low-cost microcontroller to simulate a satellite and running a predictive algorithm to forecast future solar weather cycles. Future work will investigate the accuracy of our model by performing hold-back analyses, predicting the results of a past cycle or pair of cycles (which we will omit) based on the remaining data.
Speaker
Speaker biography is not available.

Swarm Robotics: Preliminary STEM-Based Activity to Investigate Swarm Robotic Systems

Pranav R Bellannagari (IntelliScience Institute & San Jose State University, USA); Arnav Biruduraju (Mission San Jose High School, USA); Shreeya Ravali Kurapati (Stratford Preparatory School, USA); Sujith R. Thalamati and Suraj R. Thalamati (Quimby Oak Middle School, USA); Sanjana Venkatesh (University Preparatory Academy, USA); Faizi R. Zaidi (Edna Hill Middle School, USA)

Swarm robotics, inspired by the coordinated movement of animal swarms, seeks to replicate this movement through computational modeling and experiments. In the current project, a team of middle school students is collaborating on the first steps toward imbuing their robots with the intelligence to communicate with each other and prevent collisions during collective movement. The team opts for a homogeneous model, in which all robots are identical, and adopts a reactive architecture (robots sense and react) for their robotic systems. Using a Lego Mindstorms EV3 system, multiple identical robotic buggies were constructed, each equipped with an EV3 Brick microcomputer. The objective is to facilitate the collective movement of the buggies in a swarm-like fashion from one point to another, relying solely on ultrasonic sensors to prevent collisions. Programming involves directing each robot to move forward until its ultrasonic sensor detects an obstacle closer than a specified distance, prompting the robot to deviate and continue in a different direction. The experimental steps are as follows: develop the robotic buggies and program the EV3 to control their motion in a specified direction; mount ultrasonic sensors to detect obstacles and activate the maneuvering system to prevent collisions; place the buggies in a circle to observe the number of collisions in a given time; record the collective motion of the buggies and digitize it to track the motion of individual robots, counting the number of collisions and plotting them as a function of time; make a track with straight and slightly curved sections and let the robots move together toward the end of the path; and track the movement of each robot and characterize its motion. Qualitative measurements, along with visual data, will be presented in the final poster, showcasing the development, programming, and initial stages of swarm behavior in the robots.
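The reactive rule described above can be isolated as a hardware-free sketch (the threshold value and function names are hypothetical; on the EV3 this decision would feed the motor commands): drive forward until the ultrasonic reading drops below a threshold, then turn away.

```python
THRESHOLD_CM = 20.0  # illustrative avoidance distance, not the project's value

def next_action(distance_cm, threshold=THRESHOLD_CM):
    """Reactive architecture: sense (distance) -> act (forward or turn)."""
    return "turn" if distance_cm < threshold else "forward"

def count_avoidances(readings, threshold=THRESHOLD_CM):
    """How many sensor readings would have triggered an avoidance turn."""
    return sum(1 for d in readings if next_action(d, threshold) == "turn")
```

Keeping the decision logic as a pure function makes the swarm behavior easy to test and to replay against recorded sensor logs.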
Speaker
Speaker biography is not available.

Making EZIE Go Viral

Ashna Uprety (The Johns Hopkins University - Applied Physics Laboratory, USA); Ella Spirtas (Johns Hopkins Applied Physics Laboratory, USA)

We worked with NASA's EZIE (Electrojet Zeeman Imaging Explorer) Mission team at the Johns Hopkins University Applied Physics Laboratory to create a collection of innovative products that offer STEM-uninterested students the opportunity to become citizen scientists. Through the engagement fueled by these products, we hope these students will expand their horizons, become inspired, and feel welcomed into the field of STEM. To achieve this goal, we created a series of Instagram posts, a promotional poster, and a badging system, all centered around the ideas of inspiration, inclusion, and curiosity. These products use bright colors, exciting graphics, hand-drawn elements, and succinct, digestible wording to make viewers feel more comfortable engaging with scientific content. By participating in the EZIE Mission, users will understand why their individual contributions are important. Moreover, we created a social media campaign that humanizes the mission by highlighting SMEs (Subject Matter Experts). We gathered information through personal Q&A sessions with EZIE-Mag team members. These conversations put the viewer in the shoes of a NASA official, letting them find answers to questions such as: Why should I be excited? What is so important about this mission? How could it impact a community like mine? This sentiment of curiosity is built upon by the poster, which features a design competition for students to create their own school logo to be displayed on the official EZIE Mission data-tracking website. This creates a feeling of importance around the students' work; they are able to put their personal mark on official scientific data that will have a real impact on our understanding of the world around us. The poster also offers opportunities for interaction through the designated spaces for the badges. The badging system sustains the excitement of engaging with the kits.
It keeps these students committed to their work by providing rewards for consistent engagement over time, allowing them to continuously feel inspired to participate in this mission, as well as other STEM ventures, well into the future.
Speaker
Speaker biography is not available.

Sensors Application and Data Acquisition in Characterization of a Bifacial Solar Panel

Omkar Anand (Cupertino High School, USA); Akhil Manikandan (American High School, USA); Sohail Zaidi (San Jose State University, USA)

Solar energy has evolved into a well-established energy source, now widely utilized worldwide. Improved and more efficient solar panel designs are readily available and are being installed for both domestic and industrial applications. One crucial aspect of solar panel assessment involves employing multiple sensors and data acquisition techniques to monitor in-situ performance. Any significant increase in the panel's surface temperature can degrade its efficiency. To monitor surface temperatures, thermocouples and multimeters are used for direct measurement of temperature, voltage, and current. These voltage and current measurements are pivotal in developing the IV characteristics of the panel, enabling the monitoring of output power as a function of surface temperature and incident thermal radiation flux. However, continuous monitoring of voltage, current, and temperature poses significant challenges. In our current research, we are utilizing a JJN Solar 200-W bifacial PV panel. Bifacial solar panels represent a recent technological advancement, characterized by enhanced power output achieved by absorbing solar radiation at the back layer, leveraging light scattered from the ground. A 5% to 10% increase in output power has been observed with bifacial solar panels. One strategy to further boost output is to enhance the back reflection of the panel. Before embarking on experiments to improve the back reflection and overall efficiency of the panel, thorough characterization of the panel is essential. For this purpose, K-type thermocouples are employed. Our in-house temperature data acquisition system comprises 16 channels and is compatible with a custom-built PCB board connected to a Raspberry Pi. The PCB board, designed and manufactured in San Jose, along with locally developed software, facilitates data acquisition and storage in an Excel sheet.
Voltage and current measurements across a load are obtained using a Keysight multimeter connected to a PC via an RS232 link. Keysight software extracts the voltage and current information, which is crucial for measuring output power over time. To plot the IV characteristics, variable resistors are employed, and the same setup is used to record voltages and currents across the various resistors. We are investigating the functionality of the bifacial panel by altering the ground underneath it to enhance floor reflection toward the panel's backside. Comprehensive characterization results, together with the multiple sensors and the associated locally built data acquisition system, will be presented at the conference.
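The resistor-sweep step above reduces to a small calculation once the (V, I) pairs are recorded: compute power at each operating point and locate the maximum power point. The sample points below are made up for illustration, not panel measurements:

```python
def power(v, i):
    """Instantaneous electrical output power P = V * I."""
    return v * i

def max_power_point(iv_pairs):
    """Return (V, I, P) of the operating point with the highest output power."""
    v, i = max(iv_pairs, key=lambda p: power(*p))
    return v, i, power(v, i)

# Example sweep over three load resistors (illustrative values):
# [(20 V, 1.0 A), (18 V, 2.0 A), (10 V, 3.0 A)] -> MPP at 18 V, 2.0 A, 36 W.
```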
Speaker
Speaker biography is not available.

Exploring Electrostatics through The Kelvin Water Dropper: A Dive into Electrostatic Efficiency and Exposure of advanced principles of physics to middle school and high school students

Om G Sharma (Princeton University EPICS Program, USA)

Research Objectives: (1) Researching the operational mechanisms and principles behind the functionality of the Kelvin Water Dropper; (2) Making the complex scientific principles at work in the Kelvin Water Dropper easily comprehensible for middle school and high school students; (3) Exploring modifications of the Kelvin Water Dropper design, aiming to optimize its performance and efficiency for possible applications in engineering and science. Target Audience: Engineers, scientists, and students (particularly those in middle school and high school). Workshop/Poster Board Overview: This poster presentation delves into research and project development aimed at learning more about the principles behind the Kelvin Water Dropper's operation. Through studying electrostatics and electric currents, I seek to understand the fundamental principles driving the Kelvin Water Dropper's functionality. To make these complex scientific principles easily understandable to middle and high school students, I will employ innovative approaches, including interactive explanations and an engaging, easily comprehensible video. Furthermore, I will explore modifications to the Kelvin Water Dropper design, aiming to optimize its performance and efficiency by comparing and contrasting the data from the months I have worked on this project with data I may obtain in the future. Through this research endeavor, I strive to inspire further scientific curiosity in students of all ages and to gain a better, more comprehensive understanding of the principles behind the functionality of the Kelvin Water Dropper.
Speaker
Speaker biography is not available.

A Comparison of Steganography Exploits

Riya Madaan (The Johns Hopkins University Applied Physics Laboratory); Kristina K Zudock and Nicole L Brown (The Johns Hopkins University Applied Physics Laboratory, USA)

Steganography has been used in recent years as part of cybersecurity initiatives, with DNA sequences and images serving as cover data for sensitive information. This project experimentally compared the performance of DNA steganography and image steganography. DNA steganography was implemented using a Python algorithm that took the sensitive textual information the user was attempting to hide and converted it into the four DNA bases using a specific key. The algorithm then broke the converted text into smaller fragments and inserted them at random locations within a SARS-CoV-2 genome sequence. Image steganography was implemented using a least-significant-bits method, which swapped the most significant bits of each pixel in the secret image into the position of the least significant bits of the corresponding pixel in the cover image. Both techniques also had an information-retrieval function. Each method's performance was judged by separate criteria, since text and image data types have different metrics of detectability. DNA steganography's performance was evaluated using the National Center for Biotechnology Information's BLAST search to determine how biologically similar the modified sequence was to the original, expressed as a genomic similarity percentage (GSP) and a length deviation percentage (LDP). Three trials with textual information of varying character counts (45, 30, 25) were conducted using the algorithm and BLAST search, resulting in an average GSP of 86.60% and an average LDP of 24.00%, indicating a method largely undetectable by human standards. Image steganography's performance was evaluated using an image similarity detector to determine how visibly similar the modified image looked to the cover image. Three trials, using three types of images (one depicting text, one that was visually overstimulating, and one showing a gradient), were conducted by putting the original and modified images into the image similarity detector.
This resulted in an average photo similarity of 83.46% with minimal image-quality loss, indicating a change nearly undetectable to the naked eye. The two methods proved their effectiveness separately, and their performance was extremely similar (86.60% vs. 83.46%). Neither drastically outperformed the other given the pros and cons of each method, meaning the more effective method is whichever best suits the user's needs.
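As an illustrative sketch of the text-to-DNA step described above, a fixed 2-bit base mapping can serve as the "key" (the project's actual key and its fragment-insertion scheme into the SARS-CoV-2 sequence are not specified, so this is a stand-in):

```python
# Hypothetical 2-bit key: each pair of bits maps to one DNA base.
BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS = {v: k for k, v in BASE.items()}

def text_to_dna(text):
    """Encode text as DNA: 8 bits per character, 2 bits per base."""
    bits = "".join(f"{ord(ch):08b}" for ch in text)
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(seq):
    """Information-retrieval direction: decode bases back to characters."""
    bits = "".join(BITS[b] for b in seq)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
```

Each character becomes four bases, so `"hi"` encodes to an 8-base fragment, and decoding a round trip recovers the original text.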
Speaker
Speaker biography is not available.

Spiking Neural Network Implementation for Real-Time DNA Classification

Harini Thiagarajan, Arpan De and Mp Anantram (University of Washington, USA)

DNA classification allows scientists to draw conclusions in forensics, evolution, disease diagnosis, and medical research. The Single Molecule Break Junction (SMBJ) identification method is an alternative to traditional polymerase chain reaction (PCR) techniques for DNA classification, creating unique signatures from single-molecule conductance measurements of DNA. Paired with machine learning models that automatically learn strand-defining features, the SMBJ process could classify DNA strands through real-time analysis and sorting within DNA classes. Current progress toward a traditional classifier for DNA classification is limited by the need for additional preprocessing and for large-sample SMBJ conductance histogram inputs. A promising, resource-efficient, and computationally reliable neural network from the neuromorphic computing field for true real-time analysis is the Spiking Neural Network (SNN), which mimics the action potentials and spiking of neural interactions in the brain. Compared to its machine learning predecessors (CNNs, RNNs), which sample at regular intervals, SNNs only sample, or "spike" (a short burst of current), when a change in the signal occurs. Due to this sparser spiking, SNNs are theorized to use fewer computational resources and less memory. This study presents an SNN implementation that classifies DNA strands from SMBJ conductance values in real time, without additional histogram preprocessing. The preliminary SNN model correctly classifies 78.12% of samples across the three DNA datasets. With additional efficiency gains from possible neuromorphic hardware optimization, a neuromorphic Spiking Neural Network solution for DNA classification is a promising step forward in computational genomics.
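The event-driven sampling idea above can be illustrated with a simple delta encoder, which emits a spike only when the signal has moved by more than a threshold since the last spike rather than sampling at every step. This sketches the SNN input-encoding principle, not the project's actual model:

```python
def delta_spike_encode(signal, threshold):
    """Return the indices where the change since the last spike exceeds threshold."""
    spikes, last = [], signal[0]
    for t, x in enumerate(signal[1:], start=1):
        if abs(x - last) > threshold:
            spikes.append(t)  # emit a spike and latch the new level
            last = x
    return spikes
```

A slowly drifting conductance trace produces few spikes, which is the source of the resource savings relative to fixed-interval sampling.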
Speaker
Speaker biography is not available.

Facilitating a Hands-On Approach to Open and Modular Engineering Projects through Software Design and Data Collection

Yuna Chun (Montgomery Blair High School & University of Maryland, USA); Yancy Diaz-Mercado (University of Maryland, USA)

Engineering and computer science education at the high school level is almost exclusively centered on closed-ended questions with documented solutions. Though there are benefits to pursuing these types of problems, giving students the opportunity to tackle open questions without known, closed-form solutions can aid the development of essential skills, due to the exploratory nature of these efforts. While students may find hands-on, discovery-oriented experiences through extracurricular activities, such as robotics clubs, they are rarely given the opportunity to apply knowledge in engineering and science to open problems, especially in more complex projects. Contributing a modular component to such a project with a larger scope, and collaborating within a group, are both skills essential to engineering. Robotics is a discipline that combines research from many different fields of study; robots typically require perception and sensory inputs to enable automation and interaction with their physical environment. Recent advances in artificial intelligence have led to robust computer vision models that facilitate robot perception; however, to function effectively, all of them need a sufficient volume of data to train on and work with. In this project, we demonstrate the application of elementary problem solving and programming to a larger interdisciplinary project, in addition to learning essential skills for working in computer vision and control theory. We document the development of a point-by-point video annotation application that generates datasets for Tracking Any Point (TAP) computer vision neural network models. The developed software can facilitate future research in computer vision and localization for robot control, offering a more streamlined approach to data collection.
Speaker
Speaker biography is not available.

Using Robotics to Create a Cleaner Environment

Jeremy Chung (Johns Hopkins University Applied Physics Laboratory & Winston Churchill High School, USA); Alex J Zhang, Siju Onadipe, Kavya Shah, Varsha Makkapati, Pooja Dahiwadkar and Samuel Lee (Johns Hopkins Applied Physics Laboratory, USA)

1
Littering is a prevalent issue that disrupts natural environments, endangers wildlife, and stunts plant growth. Due to the scale of litter that is not properly discarded, autonomous trash collection can make current litter collection efforts more effective. In this project, we designed an autonomous robotic system for a Jackal robot to detect, retrieve, and deposit discarded cans. The approach involved AI image detection and inverse kinematics. The Jackal robot and its functions are compatible with the Robot Operating System (ROS), facilitating communication between the Intel® RealSense™ camera and the inverse kinematics ROS topic. This allows the robotic arm to locate and grasp the Coca-Cola cans. We implemented object detection by building a Coca-Cola can image database in Roboflow and training a machine learning model for object detection in YOLOv5, which reached 89.9% accuracy. Many diverse environments and types of Coca-Cola cans were used in the YOLOv5 training data, allowing the model to detect many types of Coca-Cola cans against different backgrounds. The Coca-Cola can images were taken by the Intel® RealSense™ camera, which has two sensors: the RGB camera and the depth sensor. The RGB camera measured the X and Y coordinates of the cans on a 2D plane, and the depth sensor found the Z coordinates to calculate the distance to the Coca-Cola cans. The data was then published to a ROS topic, where an inverse kinematics script calculated the desired angles of the arm joints. The arm sits on top of a turret base plate that gives it 5 degrees of freedom and a reach of up to 40 cm from the Jackal. There were many factors to consider while making the robot arm. It needed to lift and deposit Coca-Cola cans from the ground to the trash bin attached to the top of the Jackal. The cans could be grasped in any given orientation or state, such as upright or lying down, crushed or not crushed.
Additionally, the arm segments needed to be long enough to reach the cans and dispose of them in the trash bin, yet light enough for the servo motors to support their weight. We first performed a static analysis to determine the worst-case torque at each joint and identify appropriate arm segment lengths. With dimensions established, 3D models of each part were designed in SolidWorks and printed using PLA and PETG filament. Due to manufacturing defects, the Jackal could not move around autonomously, and we were not able to implement and test autonomous search algorithms. However, we were able to have the YOLOv5 model, inverse kinematics script, and robotic arm work together to detect and grab Coca-Cola cans given the arm's range of motion and information from our ROS system.
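The back-projection step described above (an RGB pixel plus a depth reading giving a 3D point) can be sketched with the standard pinhole camera model. The intrinsics below (fx, fy, cx, cy) are illustrative values, not the RealSense camera's actual calibration:

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into camera-frame
    X, Y, Z coordinates using the pinhole camera model."""
    x = (u - cx) * depth / fx  # horizontal offset scaled by depth
    y = (v - cy) * depth / fy  # vertical offset scaled by depth
    return x, y, depth

# Illustrative intrinsics (focal lengths in pixels, depth in meters)
x, y, z = pixel_to_3d(u=400, v=300, depth=0.8,
                      fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

The resulting (x, y, z) point is what an inverse kinematics script would consume to compute joint angles.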
Speaker
Speaker biography is not available.

The Impact of 5G Enablers on Telemedicine

Sahej S Batra (Johns Hopkins Applied Physics Lab, USA)

1
5G, the fifth generation of wireless cellular technology, offers remarkable advancements in upload and download speeds, connectivity, and capacity compared to previous networks. Telemedicine, the use of communication technologies for healthcare delivery across long distances, stands to greatly benefit from these advancements. However, despite its potential, telemedicine still faces issues today, mainly poor internet connectivity and poor audio/visual quality. This study investigates how enhancements in 5G technology, particularly through the optimization of protocols and latencies, can address these challenges and improve the effectiveness of telemedicine. By focusing on the important set of technologies, protocols, and architecture that support 5G networks – known as 5G enablers – this study aims to enhance connectivity and expand access to telemedicine services for a larger population of patients. Overall, in this project I examine how different aspects of 5G impact telemedicine. By researching and potentially improving these aspects, I aim to tackle challenges like poor internet connectivity and audio/visual quality, making telemedicine more effective. Potential findings include the network speeds at which a telemedicine application can run optimally. This research represents a crucial step towards harnessing the full potential of 5G in changing the world of telemedicine. Looking forward, it opens doors for further inquiry at the meeting point of 5G and telemedicine, with potential benefits in healthcare delivery and patient outcomes.
Speaker
Speaker biography is not available.

5G Network Monitoring

Anshu R Patra (Winchester Highschool, USA); Aditya N Mishra (River Hill High School & Johns Hopkins University Applied Physics Laboratory, USA); Evan Quinn, Wiley Hensley and Riley Middleton (USA); Sahej S Batra (Johns Hopkins Applied Physics Lab, USA); Dhruv Das (USA)

0
We worked to develop monitoring software to characterize RF conditions of a new 5G Campus Outdoor Cellular Testbed during Johns Hopkins' ASPIRE internship program under the mentorship of Jessica Bridgland, Noah Hamilton, and Dr. Ashutosh Dutta. To analyze the efficiency of the network, we used different devices to pull various RF metrics. From SixFab's Raspberry Pi modem to different phones, we pulled readings from across the campus. Keeping track of location, the devices monitored signal strength, noise, and other metrics and sent them back to a central database. We used Python Flask for the server, with the database created in PostgreSQL. As our project progressed, we developed numerous future plans, including a GUI and improved analysis. Focusing on anomaly detection, we could direct devices in certain areas to pull more readings, allowing for more data and more accurate conclusions about conditions in that zone. An interactive map-based GUI would model and display this data more efficiently. Poster: https://docs.google.com/presentation/d/1s-1OB5Njrg2Jxe_DyXVfSQa0pFXsQSoh6oe2XIWe8JY/edit?usp=sharing
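Independent of the web framework, the core of such a collection server is a small validation step before each database insert. The sketch below is a framework-free illustration with assumed field names, not the team's actual schema:

```python
import datetime

# Assumed minimal schema for one RF reading (illustrative field names)
REQUIRED_FIELDS = {"device_id", "lat", "lon", "rsrp_dbm", "snr_db"}

def validate_reading(reading: dict) -> dict:
    """Check an incoming RF reading and stamp it with a receive time,
    mirroring what a Flask endpoint might do before a PostgreSQL insert."""
    missing = REQUIRED_FIELDS - reading.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    reading = dict(reading)  # copy so the caller's dict is untouched
    reading["received_at"] = datetime.datetime.utcnow().isoformat()
    return reading

database = []  # stand-in for the PostgreSQL table
database.append(validate_reading(
    {"device_id": "sixfab-pi-01", "lat": 39.33, "lon": -76.62,
     "rsrp_dbm": -95.2, "snr_db": 12.4}
))
```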
Speaker
Speaker biography is not available.

Use of Computer Vision and AI techniques for enhancing performance at FIRST robotics competitions

Joshua Tewolde and John Tewolde (Grand Blanc High School, USA); Girma Tewolde (Kettering University, USA)

1
FIRST is a global community of students, mentors, parents, and sponsors who inspire and prepare the next generation of leaders in STEAM fields through fun and engaging robotics competitions. The program is available in several categories covering the entire spectrum of students from kindergarten all the way to high school: FIRST Lego League Challenge, FIRST Lego League Explore, FIRST Lego League Discover, FIRST Tech Challenge (FTC), and FIRST Robotics Competition (FRC). The challenges offered in the various competitions are designed to be appropriate for the educational background of the kids in the corresponding age group. The programs expose the students to real-world problems, which helps them develop important skills such as brainstorming ideas, design thinking, prototyping, troubleshooting, teamwork, problem solving, continuous improvement, communication, and presentation. As technology advances, the robotics competition challenges continue to push the teams to take advantage of the hardware and software tools that are becoming mainstream in industry; this includes sensors, actuators, and intelligent software algorithms. The primary focus of this poster is to present the computer vision and AI tools that have been recently introduced in the FTC and FRC competitions. Teams that invest their time to learn about and understand such modern tools benefit by achieving a robot that has better awareness of its surrounding environment and is able to detect objects more accurately, which helps achieve more reliable performance in navigation and execution of the tasks expected of the game. The authors give specific details from their experience in their participation at FTC and FRC over the last two years. A camera-based vision system along with a TensorFlow machine learning model has been used in FTC to detect and classify game markers so the robot can appropriately execute the game's mission during the autonomous period of the game.
More recently, FIRST introduced the use of AprilTag markers to further enhance the capability of the robots to execute their mission. Several AprilTags with unique IDs are placed at known fixed locations distributed around the field, which can be utilized by the robots as landmarks so they can improve the accuracy of their pose (position and orientation) estimate in the field. Detection and recognition of the AprilTags in the field requires teams to incorporate a camera-based vision system and develop a software algorithm, which is supported by the WPILib library. The poster shows how the use of advanced computer vision and TensorFlow-based AI techniques has helped improve the capabilities of the robots used in FTC and FRC competitions. This work demonstrates that as technology continues to accelerate its pace of advancement, it is important to bring it to a level that can be utilized by younger students who will be inspired to become the next generation of inventors and innovators.
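As a simplified 2D illustration of landmark-based localization (assumed geometry, not the FTC/FRC library code), a robot that detects an AprilTag at a known field position can recover its own position from the measured tag offset and its heading:

```python
import math

def robot_position(tag_world, tag_in_robot, heading_rad):
    """Recover the robot's field position from one AprilTag detection.
    tag_world: the tag's known (x, y) on the field
    tag_in_robot: the tag's (forward, left) offset measured by the camera
    heading_rad: robot heading from another sensor (e.g. an IMU)"""
    fwd, left = tag_in_robot
    # Rotate the robot-frame offset into the world frame
    wx = fwd * math.cos(heading_rad) - left * math.sin(heading_rad)
    wy = fwd * math.sin(heading_rad) + left * math.cos(heading_rad)
    # The robot sits at the tag's position minus the world-frame offset
    return (tag_world[0] - wx, tag_world[1] - wy)

# Robot facing +x (heading 0) sees a tag 2 m ahead and 1 m to its left
pos = robot_position(tag_world=(5.0, 3.0),
                     tag_in_robot=(2.0, 1.0),
                     heading_rad=0.0)
# pos is (3.0, 2.0)
```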
Speaker
Speaker biography is not available.

Session Poster-05

Poster 05 — Poster Virtual

Conference
12:30 PM — 1:00 PM EST
Local
Mar 9 Sat, 12:30 PM — 1:00 PM EST

Application of AI technology to non-destructive analysis of bronze rust

LingYe Jiang (China)

0
In the contemporary era marked by rapid technological advancements, the integration of artificial intelligence (AI) and computer technology unlocks new possibilities for the preservation and analysis of cultural artifacts. The innovative Vision Transformer model offers a novel approach to the non-destructive analysis of bronze rust by classifying images of bronze artifacts. This technology has the potential to significantly enhance the accuracy of artifact classification and exert a profound influence on the conservation and restoration of cultural relics. Notably, conventional destructive analysis methods often compromise the aesthetic value of artifacts. The presented research introduces a methodology that avoids damaging the artifacts. Through the segmentation of corroded areas based on pixel color ranges, significant results have been achieved in identifying the severity of corrosion on artifacts. This contributes to preserving the original state of the artifacts and elevates precision in classifying rust patterns. Essentially, the study pioneers the integration of cutting-edge technology in archaeology and artifact conservation. By providing more effective and accurate non-destructive analysis tools, the work propels advancements in the field, showcasing the potential of AI to revolutionize archaeological practices.
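The pixel-color-range segmentation described above can be illustrated with a minimal sketch; the RGB thresholds below are invented for illustration, and real work would use calibrated ranges, likely in HSV space:

```python
def in_rust_range(pixel, lo=(90, 40, 10), hi=(200, 120, 80)):
    """True if an (R, G, B) pixel falls in an illustrative rust-color range."""
    return all(l <= c <= h for c, l, h in zip(pixel, lo, hi))

def corrosion_severity(image):
    """Fraction of pixels classified as corroded; image is rows of RGB tuples."""
    pixels = [p for row in image for p in row]
    rusty = sum(1 for p in pixels if in_rust_range(p))
    return rusty / len(pixels)

patina = (120, 80, 40)   # inside the illustrative rust range
bronze = (60, 140, 150)  # outside the range
sample = [[patina, bronze],
          [patina, patina]]
severity = corrosion_severity(sample)  # 3 of 4 pixels -> 0.75
```

The severity score from such a mask is what would feed the classification of rust patterns described in the abstract.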
Speaker
Speaker biography is not available.

Study on the Dielectrics in the Triboelectric Nanogenerators (TENGs) to Convert Mechanical Energy to Electrical Energy

Junhyeong Lee (St George School, USA)

0
Triboelectric Nanogenerators (TENGs) use the triboelectric effect to convert mechanical energy into electrical energy. TENGs have attracted significant research interest for their various applications, such as energy harvesting and wearable technology. Recent studies focus on selecting the right material combinations and device architectures to enhance energy conversion efficiency. In this project, a review of contemporary research trends and a theoretical understanding of the triboelectric effect were performed first to predict and optimize the performance of triboelectric materials and devices. The first part of this paper focused on the assessment of the capacitances developed in the dielectric layers and electrodes. Using numerical analysis, we found the capacitance formed between the dielectric materials, and from it the total capacitance between the two metal electrodes was also calculated. The capacitances change with the geometrical and material properties of these dielectric materials. One of the challenges in triboelectric studies is understanding the underlying mechanisms at the microscopic and atomic levels. The optimized energy of the compound used in the unit influences the efficiency and stability of the triboelectric charge generation and transfer. Therefore, the second part of this paper focused on the assessment of the thermodynamic and electrical properties of each material used in the dielectric layers through a molecular editing program equipped with an auto-optimization feature. This feature was used to determine the data on the atomic properties of the materials through Density Functional Theory (DFT).
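The layer-capacitance calculation described above follows from the parallel-plate formula C = ε0·εr·A/d and the reciprocal sum for dielectric layers stacked in series between the electrodes. A minimal numerical sketch (illustrative materials and dimensions, not the paper's actual values):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def layer_capacitance(eps_r, area_m2, thickness_m):
    """Parallel-plate capacitance of one dielectric layer."""
    return EPS0 * eps_r * area_m2 / thickness_m

def series_capacitance(layers, area_m2):
    """Total capacitance of stacked dielectric layers between two electrodes.
    layers: list of (relative permittivity, thickness in m) pairs."""
    return 1.0 / sum(1.0 / layer_capacitance(e, area_m2, d)
                     for e, d in layers)

# Two illustrative layers: PTFE-like (eps_r ~ 2.1) and PET-like (eps_r ~ 3.0)
c_total = series_capacitance([(2.1, 100e-6), (3.0, 50e-6)], area_m2=1e-4)
```

As expected for a series stack, the total capacitance comes out smaller than either individual layer's capacitance.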
Speaker
Speaker biography is not available.

Study on Superconducting Materials Used in Maglev Train Using Physical Analysis and Computational Modeling

Haniel Jing (Horace Mann School, USA)

0
Superconductors play a crucial role in the operation of Magnetic levitation (Maglev) trains, offering a means to achieve efficient transportation through the use of magnetic fields, which are significantly enhanced by superconductors. High-temperature superconductors (HTS), such as Yttrium Barium Copper Oxide (YBCO), can conduct electricity without resistance at comparatively high temperatures. The use of superconducting materials in maglev trains offers several advantages, including reduced energy consumption due to the elimination of friction, lower maintenance costs, and the ability to achieve higher speeds safely and quietly compared to conventional trains. YBCO becomes superconducting at temperatures around 77 K, which can be achieved with liquid nitrogen, a more practical and less expensive cooling solution than liquid helium. In this project, rare-earth barium copper oxide (REBCO) bulk superconductors, which show high-temperature superconductivity, high mechanical strength, and resistance to strain, were studied theoretically and computationally. These materials are a subset of the broader family of cuprate superconductors. Various REBCO superconductors with the generic formula (RE)Ba2Cu3O7−x, such as Neodymium Barium Copper Oxide, Samarium Barium Copper Oxide, Europium Barium Copper Oxide, and Gadolinium Barium Copper Oxide, were computationally modeled and tested for their electrical and physical efficiencies. This paper focused on the assessment of the thermodynamic and electrical properties of each superconducting material through a molecular editing program equipped with an auto-optimization feature. This feature was used to determine the data on the atomic properties of the materials through Density Functional Theory (DFT).
Speaker
Speaker biography is not available.

Emotional Analysis Based on Text using X MBTI Data

Irene Songyeon Lee (Saint Paul Preparatory Seoul, Korea (South))

0
Recently, as various forms of text, photos, and videos are uploaded to the internet, the importance of utilizing data has increased. Among online platforms with many users, X (formerly Twitter) allows users to post about their lifestyles through text and images and provides the latest news and information. Since users create and share text about their individual lives, X's data contains their language habits and personalities. Moreover, as the importance of personal emotion increases, self-objectification based on the Myers-Briggs Type Indicator (MBTI) is now trending. MBTI divides personality into 16 detailed categories through a series of selections, emphasizes the strengths and weaknesses of each personality, and shares the result with individuals. However, MBTI is not perfectly accurate, as results can change based on emotional states or feelings. This study trains an MBTI prediction model on text data created from X. The trained model allows one to immediately detect differences among numerous feelings and psychological states, and to display the linguistic characteristics of each MBTI type through data-driven statistics.
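A toy version of such a text-based MBTI predictor can be sketched as a bag-of-words nearest-profile classifier; the data and method below are purely illustrative, since the abstract does not specify the study's actual model:

```python
from collections import Counter

def bag_of_words(text):
    """Lowercased word-count vector for a post."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def train(labeled_posts):
    """Build one aggregate word-count profile per MBTI type."""
    profiles = {}
    for mbti, text in labeled_posts:
        profiles.setdefault(mbti, Counter()).update(bag_of_words(text))
    return profiles

def predict(profiles, text):
    """Assign the MBTI type whose profile is most similar to the post."""
    vec = bag_of_words(text)
    return max(profiles, key=lambda m: cosine(vec, profiles[m]))

toy_data = [
    ("INTJ", "planning systems strategy logic analysis"),
    ("ENFP", "party friends excitement ideas fun people"),
]
profiles = train(toy_data)
guess = predict(profiles, "strategy and logic for long term planning")
```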
Speaker
Speaker biography is not available.

Session Poster-06

Poster 06 — Poster Virtual

Conference
12:30 PM — 1:00 PM EST
Local
Mar 9 Sat, 12:30 PM — 1:00 PM EST

BioHOPPR: A tool to generate BIOgraphies by implementing a HistOrical Paper Persona Ranker

Ananth Narayan (Williamsville East High School, USA)

0
The National Digital Newspaper Program (NDNP) is a partnership between the National Endowment for the Humanities (NEH) and the Library of Congress (LC) to develop an Internet-based searchable database of US newspapers, dating as far back as 1690. Chronicling America (https://chroniclingamerica.loc.gov/) is a website that provides access to these historical newspapers. These newspapers offer a wealth of information: news from around the world in the relevant time period, detailed descriptions of local events, advertisements, images, and more. Many newspapers from NY state have been digitized, including "The New York Herald" (13,677 issues), "The New York Tribune" (19,999 issues), "The Sun" (18,980 issues), "The Evening World" (14,941 issues), and others. The process of digitizing a newspaper requires many steps, such as scanning the newspaper pages or microfilm, creating image files, assigning metadata, and finally running Optical Character Recognition (OCR) software to create a searchable full-text repository. Unfortunately, the process of OCR scanning is far from perfect, generating garbled text and rendering search difficult, if not impossible. In this research, we explored the steps involved in extracting people's names from garbled text generated by the OCR software and ranking them to find influential people whose biographies can be written and discussed. The software I designed to aid this process has the following components: a. Date Range Extractor - Takes user input of two dates and then gathers all the issues of a pre-specified newspaper (such as "The Sun" used in this research) from that date range, including all pages from each issue. The program starts by taking the input of dates from the GUI, then passes those dates into a function that executes the downloading of the pages.
The program loops through the dates from the beginning date to the end date inclusive, downloads the text portion of the JSON file from the Library of Congress, and saves it to a text file with an appropriate name. b. Named Entity Extractor - Identifies person names and the specific text files in which they occur, and displays that data in a spreadsheet. After each page is downloaded, a function extracts person names from the page using the Named Entity Recognition (NER) software (https://nlp.stanford.edu/software/CRF-NER.shtml). The names and the corresponding files in which they occur are then stored in lists. c. Ranker - Transfers the text files for the papers and the spreadsheet to the main program to generate rankings for the overall significance (or influence) of the names in the papers. d. Visualization Module - Takes a person's name identified from a given page, looks for relevant information about the person, and displays it, in addition to prompting the user to add more information about that person. This data is stored in a database for future use. While we have currently focused on "The Sun" newspaper, our software is generic enough to work with other newspapers with minimal edits to the codebase.
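The Date Range Extractor's inclusive date loop can be sketched as follows. The per-page OCR URL pattern shown is an assumption about the Chronicling America site layout (to be verified against its documentation), and the LCCN is shown only for illustration:

```python
from datetime import date, timedelta

def issue_ocr_urls(lccn, start, end, pages=1):
    """Yield OCR-text URLs for every date in [start, end] inclusive,
    assuming a per-page pattern of the form
    /lccn/<lccn>/<YYYY-MM-DD>/ed-1/seq-<n>/ocr.txt (an assumption)."""
    day = start
    while day <= end:
        for seq in range(1, pages + 1):
            yield (f"https://chroniclingamerica.loc.gov/lccn/{lccn}/"
                   f"{day.isoformat()}/ed-1/seq-{seq}/ocr.txt")
        day += timedelta(days=1)

# Illustrative LCCN for a digitized title; three days, one page each
urls = list(issue_ocr_urls("sn83030272", date(1895, 1, 1), date(1895, 1, 3)))
```

Each downloaded text file would then be handed to the NER step to extract person names.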
Speaker
Speaker biography is not available.

Application of Negotiation Model in Game Theory

Coco Zhang (USA)

0
The negotiation model is a field of game theory that provides optimal strategies for negotiators in certain events. In resource allocation, negotiation models optimize the benefit for competitors and seek equilibrium in the market. The purpose of this research is to use game theory to model real-life negotiation in the debt market with a mathematical approach and quantitative expression. The negotiators in this scenario are sellers and buyers in the debt market, with revenue/compensation standing in for the payoff functions. Additionally, we try to quantify real-life situations and relationships into mathematical game theory models. Furthermore, the bargaining model is applied to incomplete-information interactions in the market.
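As one standard example of such a model (not necessarily the one used in this research), the Nash bargaining solution splits a surplus between a seller and a buyer by maximizing the product of their gains over their disagreement payoffs:

```python
def nash_bargaining(pie=100.0, d_seller=10.0, d_buyer=20.0, steps=10000):
    """Numerically find the seller's share x of a surplus that maximizes
    the Nash product (x - d_s) * (pie - x - d_b), where d_s and d_b are
    the parties' disagreement (walk-away) payoffs."""
    best_x, best_val = None, -1.0
    for i in range(steps + 1):
        x = pie * i / steps            # candidate seller share
        val = (x - d_seller) * (pie - x - d_buyer)
        if val > best_val:
            best_x, best_val = x, val
    return best_x

# Analytically the seller gets d_s + half the net surplus: 10 + 35 = 45
split = nash_bargaining()
```

The grid search recovers the closed-form answer (pie + d_s − d_b)/2, illustrating how a payoff function turns a negotiation into a solvable optimization.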
Speaker
Speaker biography is not available.

The Transformative Role of Artificial Intelligence in STEM Education: Opportunities, Challenges, and Future Directions

Karthik P Menon (Mount Olive Highschool, USA)

0
In recent years, the landscape of STEM education has been reshaped by the integration of Artificial Intelligence (AI). This work delves into the connection between AI and STEM education, exploring the opportunities, challenges, and future trends associated with this collaboration. Creating a visually engaging and informative poster on the impact of AI in STEM education involves a strategic design approach. The section on AI-powered adaptive learning environments adds dynamic visuals that show the customization of learning to individual student needs. Infographics display comparative data on student outcomes before and after AI integration, showing improvements in engagement and learning retention. In addressing challenges and ethical considerations, the poster uses concise and impactful visuals to represent concepts like data privacy and algorithmic bias. It prominently features questions that prompt viewers to reflect on the ethical dimensions of AI in education, fostering a sense of awareness and critical thinking. The section dedicated to teacher training benefits from images capturing educators actively participating in professional development programs. Quotes and testimonials are integrated to convey the transformative effect of AI training on teachers and, subsequently, on students' learning experiences. The inclusive representation of AI in fostering diverse learning styles and abilities is reinforced with visuals that celebrate diversity, incorporating symbols and images that convey the idea of AI as a tool breaking down educational barriers and promoting an inclusive learning environment for all students. In the assessment methods section, clear and concise infographics visually communicate the shift towards more dynamic and responsive evaluation techniques.
These visuals effectively convey how AI contributes to real-time feedback, enhancing the overall assessment process and providing a better understanding of student progress. This serves to captivate viewers' curiosity and emphasize the evolving nature of AI-enhanced learning environments. In conclusion, the poster offers a balanced and visually appealing portrayal of AI's transformative impact on STEM education, inviting viewers to explore the connection between technology and learning further.
Speaker
Speaker biography is not available.

The Apple Test

Shreya Gopal (Summit High School, USA)

0
We often see food that we thought was good become spoiled and rotten quicker than we expected. By using different wrapping methods, we'll determine which method keeps an apple fresher for a longer period of time. Spoiled food can look, feel, and smell unpleasant, and can make you very sick if you eat it. Food gets spoiled when microorganisms start living in the food. These microorganisms can include fungi, such as mold and yeast, as well as bacteria, causing food to decay and develop unpleasant odors, tastes, and textures. You will see which types of food wrapping keep sliced apples the freshest in the refrigerator. Throwing away food is a waste of money. Save your family money by investigating how to keep your food fresh longer. I will cut an apple into 4 pieces and test different methods of preservation to see which makes the apple take longer to brown. The first apple piece will be left unwrapped, the second wrapped in aluminum foil, the third in wax paper, and the fourth in plastic wrap. Each day for 3 days I will check the apples to see which ones brown the most and the quickest. Using this technique I will figure out the best way to preserve food and limit food waste.
Speaker
Speaker biography is not available.

Session Poster-07

Poster 07 — Poster Virtual

Conference
12:30 PM — 1:00 PM EST
Local
Mar 9 Sat, 12:30 PM — 1:00 PM EST

Study on Digital Technology in Urban Planning, Sustainable Architecture, and Economy

Minjun Sean Choi (Phillips Academy - Andover, USA)

0
As cities continue to evolve, developing a synergy between urban architecture, sustainability, and economy is essential for creating environmentally responsible, socially inclusive, and economically vibrant cities. Digital technology influences every aspect of urban life, from how cities are planned and managed to how they grow economically and sustainably. Urban architecture, sustainability, and business are interconnected, influencing cities' livability and economic viability. The relationship between these elements is important as we face resource scarcity, energy crises, and climate change. Businesses are adopting sustainable practices and digital technology that have significantly impacted urban planning and architecture. Through 3D modeling and visualization using Building Information Modeling (BIM) software, architects and planners can design and simulate spaces' physical and functional characteristics. This improves decision-making, understanding, and communication of complex ideas among project stakeholders. This research studied how big data, modeling software, and analytics tools enable urban planners to analyze and understand urban patterns, land use, businesses, and environmental factors, leading to more informed decisions. Also, smart city technologies developed using IoT (Internet of Things) were investigated to see how they support the development of responsive, efficient, and sustainable urban environments.
Speaker
Speaker biography is not available.

The Mathematics of Kolam

Mitra Iyer (USA)

0
Geometry is an integral part of daily life in South India. Every morning, homes across the region lay new geometric patterns outside their entrances to bring prosperity to the household. The art form of kolam, which originated in South India, has been a custom for generations. Traditionally, the women of the household would meticulously draw the patterns with a chalky powder or rice flour and use specific designs to communicate feelings or narrate tales. Regardless of education or social status, families in South India clean their doorsteps to participate in this age-old tradition. The traditional patterns of kolam are rooted in intricate geometric designs and symmetry, and the designs mostly consist of dots that are encompassed by infinite paths of loops. Along with being pleasing to the eye, kolam patterns embody several mathematical principles. For instance, fractals are frequently observed in these patterns, and the principle of symmetry is also apparent in the designs. Another example is the concept of infinity, which is represented through the never-ending loops seen in many traditional kolam patterns. Symmetry rules are also represented through these kolam patterns. Four-fold symmetry can be represented through kolams, and many designs use this rule: when the pattern is rotated 90, 180, or 270 degrees, the design stays the same. In addition, images can be created using kolam patterns. For example, flowers can be created using loops in the center of the design, and petals can be created using symmetrically placed lines. When divided in half, the design is identical on both sides when reversed. Moreover, almost every kolam pattern has some form of symmetry in it, whether it is translational, reflectional, or rotational. My project will focus on the three categories of kolam: geometric, freehand, and pulli (dot) kolams.
Geometric kolams rely on intricate mathematical patterns such as grids, symmetry, and fractals to create symmetrical designs. Freehand kolams, on the other hand, allow for more creativity and spontaneity, often incorporating natural motifs and abstract shapes. Pulli kolams involve connecting dots with lines to form elaborate patterns, using mathematical concepts like connectivity and spatial reasoning. Mathematics underpins each category of kolam, influencing everything from the arrangement of dots to the precise angles and proportions used in their construction, showcasing the fusion of art and mathematics in Indian cultural traditions.
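The four-fold symmetry rule described above (the design is unchanged when rotated 90, 180, or 270 degrees) can be checked programmatically on a simple dot grid; a minimal sketch:

```python
def rotate90(grid):
    """Rotate a square 0/1 dot grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def has_fourfold_symmetry(grid):
    """True if the pattern is unchanged under 90, 180, and 270 degree turns."""
    g = grid
    for _ in range(3):
        g = rotate90(g)
        if g != grid:
            return False
    return True

# A plus-sign motif, like the symmetric core of many pulli kolams
plus_sign = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
```

`has_fourfold_symmetry(plus_sign)` is true, while any lopsided grid fails the check.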
Speaker
Speaker biography is not available.

Opportunities and Challenges for 5G supporting High User Mobility

Arpita Behera (USA)

1
5G is the fifth generation of wireless cellular networks. 5G technology provides faster speeds, ultra-low latency, and greater bandwidth, which have the potential to transform user mobility. 5G wireless technology is expected to enable a fully mobile and connected society. The use of 5G technology in high user mobility scenarios promises to improve many aspects of connectivity. With growing demand for mobile services in homes, vehicles, trains, and even planes, in both civil and defense applications, 5G technology has immense potential to transform user experience and expand 5G use. The usage of 5G technology in enhanced mobility services presents numerous opportunities, including transforming vehicle, train, and aircraft experiences. 5G mmWave technology has many advantages compared to 4G, but also notable disadvantages. Its short wavelengths allow higher bandwidth, giving a high data transfer rate and shorter download times. However, short-wavelength 5G technology has low coverage and a shorter range. This can present challenges – for instance, high-speed trains can reach speeds greater than 500 km/h, which 5G technology can have trouble keeping up with. Resolving this range issue by building cell towers at more frequent and regular intervals along a path of travel could be infrastructure intensive and costly. Another obstacle is that 5G is very easily disrupted, as it has difficulty passing through objects. To ensure and maintain quality of service on a train or a vehicle, the device must connect to different cell towers at a fast pace, and cell towers must be placed at consistent intervals so that the short range and susceptibility to disruption do not hinder functionality. Understanding the opportunities and mitigating the challenges of 5G in high user mobility sectors would be crucial to advancing mobile connectivity.
This poster highlights the challenges and opportunities associated with 5G supporting high user mobility with various use cases.
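The handover-frequency concern can be made concrete with a rough back-of-the-envelope calculation (the cell radii below are illustrative, not measured values):

```python
def handover_interval_s(cell_radius_m, train_speed_kmh):
    """Rough time between handovers for a train crossing cells laid end
    to end along the track (each cell spans about 2 * radius of travel)."""
    speed_ms = train_speed_kmh / 3.6  # km/h -> m/s
    return 2 * cell_radius_m / speed_ms

# Illustrative mmWave small cell (~200 m radius) vs. mid-band macro cell (~2 km)
t_mmwave = handover_interval_s(200, 500)    # ~2.9 s between handovers
t_midband = handover_interval_s(2000, 500)  # ~28.8 s between handovers
```

A handover every few seconds is what makes dense mmWave deployments along high-speed rail lines so demanding.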
Speaker
Speaker biography is not available.

The STEM Behind Basketball

Christian Michael SantosSilva (Georgetown Day School, USA); David Godwin (The Edmund Burke School, USA); Joseph Connell (The Field School, USA)

0
Introduction: Students sit in geometry or physics agonizing over these classes and wondering when and where they will ever use this information. Our love for these subjects has sparked our interest in perfecting our love for basketball. How do you sharpen your shot on the basketball court and find consistency with it? Christian SantosSilva, David Godwin, and Joseph Connell worked together on this project in hopes of bettering their shooting accuracy.

Abstract: Professionals have multiple theories on how to improve your scoring in a game. We decided to dig deep into these theories in hopes of perfecting our talent within the sport of basketball. In reviewing and analyzing research dedicated to this project, we have found that the arc, angle, energy, and repetition used in shooting the ball factor into the success of making the shot. The ball's trajectory combines two motions: uniform horizontal motion and uniformly accelerated vertical motion due to Earth's gravity. When the ball is shot it travels in a parabolic trajectory. For a player to improve their likelihood of scoring a basket, they must raise the apex of the ball's flight by increasing the shooting angle. The flatter the shot, the less room the ball has to make it into the basket; the more arc on the shot, the larger the effective opening of the rim appears to the descending ball. Research states that the ideal arc should be 45-50 degrees. This angle ensures that the ball has enough space to travel into the basket, avoiding the rim. Research also suggests that targeting a vertical axis residing 3.326 inches behind the backboard can heighten scoring accuracy. Recognized as a bank shot, this precision shot entails locating the optimal point on the backboard to ensure a made basket. Mastering the angles related to this type of shot can accentuate shooting proficiency.
Multiple professionals recommend that, to increase shooting percentage, a player must shoot daily using the optimal angle of release, building muscle memory. But what does this mean? Huupe suggests 333 shots per day, while Basketball Mindset Training advises 250 close-range shots followed by 100 from different positions. We will explore these regimens to determine which will set us apart from others when scoring in the game. Sources: The Physics of Basketball (real-world-physics-problems.com); Optimal Targets for the Bank Shot in Men's Basketball (Citation Index, NCSU Libraries); How to Shoot Better in Basketball: 13 Tips to Improve Your Shooting, 12/02/2023 (basketballmindsettraining.com); How Many Shots a Day Do You Have to Take to Play College Basketball? (huupe.com); The Overlooked Importance of Arm & Wrist Angles (breakthroughbasketball.com)
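The no-drag parabolic model described above can be sketched in a few lines of Python. The launch speed, release height, and rim height below are illustrative numbers, not measurements from the project:

```python
import math

G = 32.2  # gravitational acceleration, ft/s^2

def shot_makes_distance(speed, angle_deg, release_height, rim_height=10.0):
    """Horizontal distance (ft) at which a ball launched at `speed` ft/s
    and `angle_deg` degrees from `release_height` ft descends back
    through rim height -- a simple no-drag parabolic model."""
    a = math.radians(angle_deg)
    vx, vy = speed * math.cos(a), speed * math.sin(a)
    dh = rim_height - release_height
    # Solve release_height + vy*t - 0.5*G*t^2 = rim_height (descending root)
    disc = vy * vy - 2 * G * dh
    if disc < 0:
        return None  # shot never reaches rim height
    t = (vy + math.sqrt(disc)) / G
    return vx * t

# Compare a flatter release with the ~45-50 degree arc the abstract cites,
# for a hypothetical 25 ft/s shot released at 7 ft.
for angle in (40, 45, 50):
    d = shot_makes_distance(25.0, angle, 7.0)
    print(f"{angle} deg -> ball crosses rim height at {d:.1f} ft")
```

Higher arcs keep the ball in the air longer, so the same launch speed covers more distance and the ball drops into the basket at a steeper, more forgiving entry angle.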
Speaker
Speaker biography is not available.

Session Poster-08

Poster 08 — Poster Virtual

Conference
12:30 PM — 1:00 PM EST
Local
Mar 9 Sat, 12:30 PM — 1:00 PM EST

Reimagining the Insulin Pump User Experience

Daniel L Perez (Holy Ghost Preparatory School, USA)

Insulin delivery pumps are essential wearable devices for millions of individuals worldwide with diabetes. It is certainly a challenge for medical device manufacturers to develop a single solution that can manage the insulin delivery needs of a wide variety of users. In this research we examine typical user requirements for an insulin delivery pump and associated insulin management devices in terms of technical software requirements and user needs. We use software to recreate the graphical user interface (GUI) and typical user experience (UX) design of existing technology. In conclusion, we identify features that will be involved in reimagining the next generation of insulin management devices and related software.
Speaker
Speaker biography is not available.

The Rise of Sustainable Real Estate: Eco-Friendly Development of Golf Courses

Jaeyoon Kim (Westminster School, USA)

As technology advances and societal awareness grows, a path toward more resilient and sustainable urban environments becomes important in sustainable real estate development. The shift toward sustainable real estate development is motivated by increasing concern about environmental issues and changes in consumer preferences. Recently, considerable effort has been made to combat climate change, support the transition to a low-carbon economy, and meet the growing demand for green and healthy living environments. Sustainable developments often adhere to green building standards, which guide the design and operation phases; these can ensure that buildings are energy-efficient, conserve resources, and provide healthy living environments. This paper investigated how different stakeholders' approaches prioritize not only the economic and social aspects but also environmental protection and the efficient use of resources in real estate development, including golf course development. Choosing sustainable golf course sites with minimal environmental impact requires a comprehensive approach. Taking a golf course as an example, this work discussed designing the course to fit the natural landscape, minimize vegetation removal, and incorporate existing natural features and habitats into the design to preserve biodiversity. Sustainable landscaping, such as using native or adapted plant species that require less water, fewer pesticides, and less fertilizer, was also discussed. This approach can help ensure that golf courses are assets for recreation and biodiversity, demonstrating that sport and sustainability can go hand in hand. Lastly, challenges such as higher upfront costs and the need for specialized expertise in development, which can pose barriers to adoption, are discussed.
Speaker
Speaker biography is not available.

The Empowerment and Representation of Female Characters: The Female Image in King of Glory

Ziqi Liu (No, China)

This paper utilizes feminist and cultural criticism methodologies, as well as behavioral science research methods, to analyze the influence of digital games as cultural symbols. It examines the arguments surrounding the design and representation of female characters in King of Glory, exploring the reasoning behind these choices and scrutinizing the thought processes of designers and the consumption preferences of players. The central objective is a comparative analysis between female and male hero characters in King of Glory, leveraging historical and mythological figures to highlight the design of their distinctive skills and the incorporation of cultural symbols. The paper offers methods and suggestions for designing unique female characters, considers their impact on player interaction and character design, and emphasizes the focus of the game's narrative and interactivity, combining these suggestions with historical and theoretical knowledge. It further explores the influence of cultural symbols on character design and the audience's sense of experience, and analyzes the consumption psychology of players. Finally, it calls for King of Glory to increase the social diversity of its female characters and to enrich their distinctive skills and thinking ability.
Speaker
Speaker biography is not available.

An AI language translator breaking down the barriers of interlingual communication

Wonjae Choi (Chadwick International School, Korea (South))

In contemporary society, communication difficulties and conflicts often arise from intergenerational differences in language and culture. In particular, when there are significant disparities in the understanding of language usage and meaning, smooth conversation and effective communication may not be achieved. Sejong GPT translates Chinese characters and idiomatic expressions (such as "나흘," "명일," "금일," etc.) into simpler words for the MZ generation, whose reading comprehension has declined with the proliferation of various media and the internet. It also helps the older generation, who may not be familiar with the latest slang, interpret texts that contain contemporary neologisms (such as "알잘딱" and "존버"). By breaking down these language barriers, it helps resolve social conflicts between generations. Finally, it makes various online texts easier to understand by representing text as images or condensing lengthy sentences.
Speaker
Speaker biography is not available.

Session Poster-09

Poster 09 — Poster Virtual

Conference
1:00 PM — 1:30 PM EST
Local
Mar 9 Sat, 1:00 PM — 1:30 PM EST

Development of Alternative Low Pass Filters to Improve the Quality of Digital Images

Jeesung Lee (St. Johnsbury Academy Jeju, USA)

Different sophisticated approaches are often applied to images to enhance their resolution or quality. Employing a low-pass filter can improve perceived image quality by reducing high-frequency content: when dealing with noisy images, a low-pass filter can suppress noise while preserving as much of the important detail as possible. The first step of the analysis is to collect frequency data. The Discrete Fourier Transform (DFT) is commonly used to obtain frequency representations of the corresponding images. Various filters are then designed and applied to this frequency-domain information. In this paper, a few alternative types of mathematical filters, such as a modified Gaussian filter, a boxcar filter, and a trigonometric filter, were employed to produce images of better or different quality. First, as in the standard process, the information in the frequency domain was converted back into the image domain using the inverse Fourier transform. In this process, the different modified filters showed their distinct features and output images. A non-conventional method was also tested, transforming information in the image domain into a frequency space. During the presented process, a few filters turned out to be effective at reducing noise while preserving important image details. While this process does not add new details or truly enhance the resolution of the image, the results were sensitive to the constants used in the filter formulas. The tested filters can make an image more aesthetically pleasing in some contexts and can serve as a preprocessing step for further image analysis or enhancement techniques that require noise reduction.
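The pipeline described above (forward DFT, frequency-domain mask, inverse DFT) can be sketched with a plain Gaussian low-pass filter; the paper's modified filters differ, and the synthetic ramp image below is purely illustrative:

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Attenuate high spatial frequencies of a 2-D image with a
    Gaussian mask in the DFT domain (sigma in frequency-bin units)."""
    F = np.fft.fftshift(np.fft.fft2(img))      # DC term moved to centre
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    d2 = (y - h // 2) ** 2 + (x - w // 2) ** 2  # squared distance from DC
    mask = np.exp(-d2 / (2.0 * sigma ** 2))     # 1 at DC, ->0 at high freq
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Noisy synthetic image: a smooth ramp plus high-frequency noise.
rng = np.random.default_rng(0)
img = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1))
noisy = img + 0.2 * rng.standard_normal((64, 64))
smoothed = gaussian_lowpass(noisy, sigma=8.0)
print(np.std(noisy - img), np.std(smoothed - img))  # residual noise shrinks
```

Because the noise lives mostly at high spatial frequencies while the ramp lives at low ones, the mask removes far more noise than signal, which is the behaviour the abstract reports for its effective filters.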
Speaker
Speaker biography is not available.

Coherent Client-Side Deduplication of Encrypted Form of Data and Public Auditing in Cloud

Debjyoti Das (Amrita Vishwa Vidyapeetham, India); Kavitha C. r (Amrita School of Computing, Bengaluru, Amrita Vishwa Vidyapeetham, India)

Today, cloud storage is growing rapidly and has emerged as a quintessential component of cloud computing: it stores records or data and supports all kinds of applications. Enterprises choose cloud storage because it provides a cost-effective and superior replacement for local storage. Business processes are often exposed to the danger of leakage, as data stored in the cloud is prone to security risks. Data redundancy has been an enormous problem that wastes a large amount of storage space in the cloud storage environment. There is now massive demand for data storage services such as Dropbox, but as the quantity of stored data is huge, data may occasionally be lost, and storage servers are prone to malicious attacks, so the confidentiality and integrity of data are important. This problem can be efficiently reduced and correctly managed through deduplication techniques that remove duplicate data in cloud storage systems. To limit the quantity of stored data, servers can increase storage efficiency by eliminating duplicated files; client-side deduplication has become more popular due to its effectiveness in computation and communication. The proposed work develops a new method using the Elliptic Curve Cryptography encryption algorithm in the cloud environment to build a well-established structure. A scheme is proposed that removes the redundancy of information stored in the cloud while the encryption method maintains the authenticity of the data.
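The core tension the abstract describes -- deduplicating data that is stored encrypted -- is commonly resolved by deriving the encryption key from the content itself, so identical plaintexts yield identical ciphertexts. The sketch below illustrates that idea only; the XOR keystream cipher is a toy stand-in for the paper's ECC-based scheme, and `DedupStore` is a hypothetical server abstraction:

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    # Key derived from the content itself, so identical files
    # produce identical ciphertexts and can be deduplicated.
    return hashlib.sha256(data).digest()

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in keystream cipher (NOT the paper's ECC scheme):
    # XOR with a SHA-256-based keystream, for illustration only.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

class DedupStore:
    """Hypothetical server that keeps one ciphertext per unique fingerprint."""
    def __init__(self):
        self.blobs = {}  # fingerprint -> ciphertext

    def upload(self, data: bytes) -> tuple[str, bool]:
        key = convergent_key(data)
        ct = toy_encrypt(data, key)
        fp = hashlib.sha256(ct).hexdigest()
        duplicate = fp in self.blobs
        if not duplicate:
            self.blobs[fp] = ct
        return fp, duplicate

store = DedupStore()
fp1, dup1 = store.upload(b"quarterly-report.pdf contents")
fp2, dup2 = store.upload(b"quarterly-report.pdf contents")  # same file again
print(dup1, dup2, len(store.blobs))  # False True 1
```

In a real client-side scheme the client would send only the fingerprint first and skip the upload entirely when the server already holds the blob, which is where the computation and communication savings come from.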
Speaker
Speaker biography is not available.

Decoding SAT Scores: A Multifaceted Analysis of Demographic Factors Influencing SAT Scores Across Diverse Regions

Margaret Liu (Weston High School, USA); Wei Lu (Keene State College, USNH, USA); Linda Zhao (University of Pennsylvania Wharton, USA); Hong Lu (University of Toronto, Canada)

Introduction: SAT scores have traditionally been considered an important part of college admissions and reflect a student's academic capacity. However, various studies showed that factors besides a student's academic intelligence, such as socioeconomic background and ethnicity, are significantly associated with students' SAT scores. Most of those analyses were based on individual test takers' scores and background information from one or multiple schools. This study employed aggregated school-level data to assess the quantitative relationships between average SAT scores and school-level demographics and interventions. The assessment aims to help regional and national education policymakers identify factors related to school academic merits and devise inclusive and effective ways to promote educational equality. The study extracted two SAT score datasets from public high schools in Massachusetts and New York City. Having both datasets allows for comparative analysis and broadens the scope of the findings. Methodology: Three analytical methods - multiple linear regression, relaxed Least Absolute Shrinkage and Selection Operator (LASSO), and decision trees - were conducted sequentially to decipher complex relationships among variables. Multiple linear regression identified the significant factors and estimated their effects on SAT scores. The relaxed LASSO technique was applied to refine the model by eliminating less significant predictors. Decision tree analysis was used to predict the outcomes in a complex setting of multiple interacting variables. Results: Analyses suggest a significant correlation between SAT scores and certain demographic factors. Schools with a large proportion of African American or Hispanic students and students from low-income families tend to have lower average SAT scores. In contrast, schools with a large proportion of Asian students tend to have higher average scores. 
In Massachusetts, a 1% increase in the percentage of low-income students or African American students would lead to a decrease of 5.7 points and 2.5 points in the school's average SAT score, respectively. In contrast, a 1% increase in the percentage of Asian students would lead to an increase of 3.2 points in the school's average SAT score. In New York City, the model estimated that a 1% increase in the percentage of Hispanic or African American students would decrease the school's average SAT score by 4.2 and 4.6 points, respectively. The factors that positively or negatively impact the school's average SAT score are the same between the two regions. Conclusion/Future Work: The results suggest that statistically, there are SAT score gaps between races and classes. Schools with high percentages of Black, Hispanic, and low-income students generally have lower average scores than schools with high percentages of White, Asian, and well-off students. The results indicate that more SAT preparation resources are needed at schools with higher percentages of Black, Hispanic, and low-income students in order to level the playing field in SAT testing. Future analysis will incorporate additional regional data to further generalize the conclusion. Moreover, the methodologies will be refined to account for limitations and variations in the data.
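The multiple-regression step can be illustrated with ordinary least squares on synthetic school-level data. The data below is generated with coefficients of the same sign and magnitude as those reported (and a made-up baseline and noise level); it is not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical schools

# Synthetic school-level predictors (percent low-income, percent Asian)
low_income = rng.uniform(0, 80, n)
asian = rng.uniform(0, 40, n)

# Average SAT scores with effects of the sign and size the abstract
# reports for Massachusetts (-5.7 and +3.2 points per percentage point)
sat = 1150 - 5.7 * low_income + 3.2 * asian + rng.normal(0, 25, n)

# Multiple linear regression by ordinary least squares
X = np.column_stack([np.ones(n), low_income, asian])
beta, *_ = np.linalg.lstsq(X, sat, rcond=None)
print("intercept, low-income, Asian coefficients:", beta.round(2))
```

With enough schools the fitted coefficients recover the generating effects, which is how the study's per-percentage-point estimates should be read: the expected change in a school's average score, holding the other predictors fixed.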
Speaker
Speaker biography is not available.

Mechanism Study and Bioinformatics Analysis of Rutin Inhibiting Inflammatory Response After Distraction Spinal Cord Injury

Junrui Jonathan Hai (PRISMS of Princeton, USA); Bo Han, Weishi Liang and Xianjun Qu (Capital Medical University of China, China)

Background and Purpose: Spinal scoliosis is a medical condition characterized by an abnormal lateral curvature of the spine. Distraction spinal cord injuries (DSCIs) often occur as severe neurological impairments caused by the inability of the spinal cord to tolerate excess distraction stress during scoliosis correction surgery. This molecular biology study aimed to investigate the mechanism and potential biomarkers of rutin's inhibition of inflammatory responses after DSCIs. Methods: Prior to surgical intervention, all rats were randomly divided into three groups: the sham group, the DSCI group, and the DSCI with rutin treatment (RT) group. After surgery, the Basso-Beattie-Bresnahan (BBB) score and slope test score were used to evaluate changes in the rats' neurological function. Seven days after surgery, histopathological examinations of the spinal cord tissues were performed. We performed genome-wide transcriptional profiling of the spinal cord of DSCI and RT rats by high-throughput RNA sequencing one week after SCI, and also compared genome-wide transcriptional profiles from DSCI and sham rats. Protein-protein interaction network analysis, molecular docking, and in vitro LPS-stimulated BV2 cell experiments validated the role of MAPK13 as a pivotal gene in rutin's inhibition of the inflammatory response after DSCI. Results: This study identified 256 differentially co-expressed genes among sham, DSCI, and RT rats. Gene Ontology and Kyoto Encyclopedia of Genes and Genomes enrichment analyses were employed to delineate the biological characteristics of these genes in terms of cellular components, biological processes, and molecular functions. Enriched functional pathways, such as the MAPK pathway and regulation of the inflammatory response, were discovered. Validation through qRT-PCR confirmed rutin's efficacy in treating DSCI by modulating the expression of MAPK13, SOST, Htr2b, GDF3, and Gpnmb proteins. 
In vitro experiments revealed that the MAPK13-mediated downstream proinflammatory pathway is a crucial mechanism by which rutin exerts its effects. Conclusions: This study elucidates the anti-inflammatory mechanism of rutin in treating DSCI, potentially offering novel targets for DSCI intervention. This research is anticipated to offer new hope in the treatment of DSCI, while also delivering targeted strategies for the precision application of natural medicine.
Speaker
Speaker biography is not available.

Session Poster-10

Poster 10 — Poster Virtual

Conference
1:00 PM — 1:30 PM EST
Local
Mar 9 Sat, 1:00 PM — 1:30 PM EST

AI Powered Mobile Analysis of Scoliosis among Children in Qinghai-Tibetan Plateau of China

Junrui Jonathan Hai (PRISMS of Princeton, USA); Nan Meng (The University of Hong Kong, Hong Kong); Moxin Zhao (Clinical School of Medicine the University of Hong Kong, Hong Kong); Jason Pui-Yin Cheung (The University of Hong Kong, Hong Kong); Teng Zhang (Clinical School of Medicine the University of Hong Kong, Hong Kong)

Adolescent idiopathic scoliosis (AIS), a common 3D spinal deformity affecting up to 2.2% of boys and 4.8% of girls, often worsens during puberty, leading to decreased quality of life and mobility. Traditional physical examinations are subjective and ineffective in detecting specific deformities, while radiographic assessment is inaccessible, particularly in remote areas. To address these challenges, we developed AlignProCARE, a mobile app for spine alignment analysis. Our study introduces ScolioNets, an AI-powered algorithm deployed on AlignProCARE, offering radiation-free and early detection of scoliosis. We conducted a validation study in the Qinghai-Tibetan Plateau, China, using a smartphone to assess ScolioNets' efficacy and accuracy. During July-August 2023, we studied 80 AIS patients from the Qinghai-Tibetan Plateau, China, averaging 14.65 years old. We used an iPhone 12 Pro smartphone with LiDAR to collect unclothed back images of standing patients. Each patient's RGB and depth images were captured and uploaded to our AI server for analysis using ScolioNets. We also employed Score-CAM, a visualization technique, to highlight the algorithm's focus regions using a heatmap, showing areas supporting classification decisions. Results: In prospective testing, 80 patients (mean age: 14.65 [SD 1.77]) were assessed using ScolioNets. Three cases required no interventions, while 77 cases required varying levels of intervention. ScolioNets demonstrated good performance in recommending follow-up treatment, with an area under the ROC curve of 0.82. It effectively distinguished between normal-mild (Sensitivity = 1.0, Specificity = 0.97) and moderate-severe (Sensitivity = 0.97, Specificity = 1.0) cases. Additionally, it differentiated between subjects requiring treatment or not with an AUC of 0.82. 
Discussion and Conclusion: AI-powered mobile analysis such as AlignProCARE with ScolioNets provides accessible and accurate mobile AIS assessments; it is radiation-free and cost-effective, which facilitates widespread adoption. It significantly advances scoliosis screening efforts for adolescents, contributing to improved spinal health.
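The evaluation metrics reported above can be computed from raw classifier outputs as follows; the labels and scores below are made up for illustration and are not the study's data:

```python
def sensitivity_specificity(y_true, y_pred):
    # y_true/y_pred: 1 = needs intervention, 0 = normal-mild
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, scores):
    # Area under the ROC curve = probability that a random positive
    # outranks a random negative (Mann-Whitney formulation)
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy check with made-up classifier outputs
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
yhat = [1 if v >= 0.5 else 0 for v in s]
sens, spec = sensitivity_specificity(y, yhat)
print(sens, spec, auc(y, s))
```

Sensitivity and specificity depend on the chosen decision threshold, while the AUC summarizes ranking quality across all thresholds, which is why the abstract reports both kinds of numbers.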
Speaker
Speaker biography is not available.

Investigating the Mechanisms of Microglia/Macrophage Activation in Mediating Inflammatory Responses following Distraction Spinal Cord Injury

Junrui Jonathan Hai (PRISMS of Princeton, USA); Bo Han, Weishi Liang and Xianjun Qu (Capital Medical University of China, China)

Background and Purpose: Scoliosis is a medical condition characterized by an abnormal lateral curvature of the spine. Distraction spinal cord injuries (DSCIs) often occur as the neurological complication following severe scoliosis correction surgery, and in severe cases, paralysis may appear. Inflammation is an important mechanism for the aggravation of spinal cord tissue injury after DSCI, but the inflammatory pathway and microglial/macrophage activation mechanisms of DSCI are still unclear. Therefore, the present study aimed to investigate the activation of microglia/macrophages, along with changes in the TLR4-mediated NF-κB and MAPK pathways after DSCIs in Bama miniature pigs. Methods: Pigs were divided into three groups: sham, complete distraction spinal cord injury (CDSCI), and incomplete distraction spinal cord injury (IDSCI). Behavioral changes were assessed using the Tarlov scale and individual limb motor scale (ILMS). After seven days, histopathological examinations were conducted. Immunohistochemistry was used to detect Caspase-3 expression, while immunofluorescence was used to assess the M1/M2 phenotype changes in microglia/macrophages and NF-κB P65 expression. Western blotting was performed to determine the expression of TLR4/NF-κB/MAPK pathway-related proteins. Results: The results demonstrated significant decreases in Tarlov and ILMS scores in both DSCI groups when compared to the sham group. Hematoxylin and eosin (HE) and Nissl staining revealed substantial disruption in the tissue structure and nerve fiber tracts within the distracted spinal cord tissues. Both DSCI groups exhibited a reduced number of surviving neurons and increased expression of Caspase-3. Immunofluorescence staining showed increased expression of CD16 and CD206 in microglia/macrophages in both DSCI groups. Furthermore, the CDSCI group exhibited higher CD16 and lower CD206 expression levels compared to the IDSCI group. 
Additionally, the intensity of NF-κB P65 fluorescence was significantly enhanced in pigs with DSCIs. Western blotting results showed increased expression of TLR4, p-IκBα, NF-κB P65, p-JNK, p-ERK, and p-P38 proteins in spinal cord tissues following DSCI. Conclusions: The present study indicated that continuous mechanical distraction of the spinal cord in Bama miniature pigs resulted in decreased neurological function, histopathological lesions, and neuronal apoptosis, which increased in severity as the degree of the DSCI increased. Regarding the mechanism behind DSCIs, the results suggested that the TLR4/MAPK/NF-κB inflammatory pathway and microglia/macrophage activation are important mechanisms of inflammatory injury and tissue-injury aggravation after DSCI. This study successfully established a large-animal model to simulate clinical DSCI, and the results provide experimental evidence for further investigating DSCI mechanisms and potential anti-inflammatory targets.
Speaker
Speaker biography is not available.

Ryan's Lego-Transformer Creations: Building Dreams with Bricks and Imagination

Mehdi Roopaei (University of Wisconsin - Platteville, USA)

Hi! I'm Ryan. I'm almost 6 and go to preschool. I really like to play with my Lego and transformer toys. They're super fun because I get to build stuff. My dad says I have a good 'mindset' because I like to think when I play. I also try to make cool things with my Lego. That's my 'skillset'. So, I had this cool idea to mix Lego and transformers together. I call them Lego-Transformers! It's kind of hard because transformers can change from robots to cars, and I try to make my Lego do that too. I think hard about how to fit the Lego pieces so they can change like that. I started doing this because I love both Lego and transformers. It's a bit tricky to make Lego change shapes, but I really like trying. I use my imagination to solve this puzzle. It's like a game where I figure out how each piece should go. I do this project mostly by myself, but my dad helps me, especially with writing this because I'm still learning to write better. He also gives me ideas when I'm stuck. When I first thought of this, I looked for Lego pieces that could move like the transformers. I wanted to see if I could make them bend or twist. Then I started building, trying to make a robot that could turn into a car. I had to think about where each Lego piece should go. After I build one, I check if it can really change from a robot to a car and back. Sometimes, it doesn't work right, and I must try again with different pieces. I change my Lego-Transformers a lot. Sometimes, they don't look cool enough, or they don't stay together. But I don't give up. I keep trying until it looks great and works well. Every time I make a new one, I learn something new. It's awesome to see my ideas turn into real Lego toys I can play with. I did what I wanted to do, and it's so much fun to play with them. What I'm planning to do next is make more Lego-Transformers. I want to try different kinds and maybe make them even cooler. I also want to get better at building them. 
Maybe I can show my friends and see what they think. That would be fun. Next year, I want to make a whole bunch of them, like a collection. Maybe I can make ones that are different colors or even ones that look like animals. That would be cool. I might even show them at school or somewhere else where people can see what I made. This project is special to me. I like building and creating things. It makes me happy to see what I can do with my Lego and my imagination. My dad says it's good to keep learning and trying new things, and that's what I'm doing with my Lego-Transformers.
Speaker
Speaker biography is not available.

Absorption Characteristics of Photons in Nano-metallic Structure Using Numerical and Computational Analysis

Richard Kyung (CRG-NJ, USA); Wonse Kim (St. Paul School, USA)

Recently, plasmonic metamaterials have garnered significant attention due to their unique absorption properties. Plasmonic metamaterials can manipulate light absorption at the nanoscale, which leads to strong interactions between light and matter, such as enhanced absorption, scattering, and localization of electromagnetic fields. These metamaterial absorbers are designed to suppress reflection and transmission while maximizing absorption, making them useful in applications such as stealth technology, thermal emitters, and photodetection. In this work, the ability of plasmonic metamaterials to tune their absorption properties over a wide range of wavelengths was studied. By adjusting the geometrical parameters and material composition, this paper tailored the absorption spectrum of the plasmonic materials; such materials can be used to enhance the efficiency of solar cells by increasing light absorption across a broad spectrum of wavelengths. Through this process, the optimal incident angle of the various lightwaves and the effective index of refraction were calculated using Maxwell's equations, followed by the dispersion relation and modeling of various metamaterials. Depending on whether single-period or multi-period materials are used, light propagates through the metamaterials in an unconventional manner, such as hyperbolic dispersion, i.e., anisotropic propagation. For this research project, COMSOL, Matlab, and a data spreadsheet were used to create a model of the metamaterial and graphical simulations of the photons. The modeling included virtually constructing metamaterials of specific dimensions and indices of refraction and simulating photons passing through them.
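One standard way to see the hyperbolic dispersion mentioned above is through effective-medium theory for a sub-wavelength metal-dielectric multilayer: the in-plane and out-of-plane permittivities are simple weighted averages, and dispersion becomes hyperbolic when they have opposite signs. The sketch below uses the usual layered-medium formulas with hypothetical permittivity values, not the paper's COMSOL model:

```python
def effective_permittivity(eps_m, eps_d, f):
    """Effective-medium permittivities of a sub-wavelength
    metal-dielectric multilayer with metal fill fraction f."""
    eps_par = f * eps_m + (1 - f) * eps_d           # in-plane (parallel)
    eps_perp = 1.0 / (f / eps_m + (1 - f) / eps_d)  # out-of-plane
    return eps_par, eps_perp

# Hypothetical values: eps_m = -20 (a metal such as silver in the
# near-infrared), eps_d = 2.25 (a dielectric), 30% metal fill fraction
eps_par, eps_perp = effective_permittivity(-20.0, 2.25, 0.3)
print(eps_par, eps_perp)
print("hyperbolic dispersion:", eps_par * eps_perp < 0)
```

When the two effective permittivities differ in sign, the isofrequency surface of the extraordinary wave opens into a hyperboloid, giving the anisotropic propagation the abstract describes.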
Speaker
Speaker biography is not available.

Session Poster-11

Poster 11 — Poster Virtual

Conference
1:00 PM — 1:30 PM EST
Local
Mar 9 Sat, 1:00 PM — 1:30 PM EST

Empathetic Mental Health Support: Building an AI Therapy Chatbot for High School Students using Large Language Models

Kareem Boukari (Caesar Rodney High School & Delaware State University, USA)

Mental health challenges are a growing concern among high school students, many of whom unfortunately choose to suffer in silence rather than discuss their issues. This creates an urgent need for accessible, non-judgmental, and confidential support. To tackle this issue, I propose the development of a therapist chatbot utilizing Large Language Models (LLMs) to engage in natural and empathetic conversations with high school students. The project aims to develop a therapy chatbot adapted for high school students by using the capabilities of an LLM fine-tuned on an adapted conversation dataset. This chatbot will provide a supportive conversational experience for high school students seeking mental health assistance. The primary goal is to provide a mental health support system that addresses the specific needs of high school students using emerging technologies such as AI, Transformers, and language models. The project will involve data collection, training, and fine-tuning of the LLM to ensure empathetic, effective, and adaptive interactions that closely resemble professional responses. The data collection involves a diverse and representative dataset of mental health-related conversations and scenarios. We chose the Bidirectional Encoder Representations from Transformers (BERT) LLM, which will undergo training and fine-tuning on the collected dataset to adapt its responses to high school students' problems. After this stage, we develop a chatbot interface designed with a focus on creating a safe and non-judgmental environment for students, ensuring privacy and confidentiality. An important part of this project is testing and studying feedback from the chatbot to ensure safety before deployment of the model. This project focuses on offering a safe conversational system that responds with empathy and sensitivity, providing appropriate support to students in distress. 
The resultant chatbot will serve as a valuable resource: a safe, confidential, and supportive space where students will not be judged and can discuss their concerns and receive sound guidance, with the aim of reducing barriers to seeking help and promoting overall well-being on campus. This project represents an innovative approach to providing mental health support to high school students and demonstrates the potential of AI in addressing real-world challenges. The therapist chatbot will complement existing services and contribute to a healthier high school environment.
Speaker
Speaker biography is not available.

Wearable ultrasound devices for blood pressure measurement: a simulation study

King Ho Guo (UWC CSC Chang Shu College, Japan)

High blood pressure poses a significant risk of stroke, heart disease, and heart attack, contributing to 5 million stroke-related deaths and disabilities annually worldwide. Individuals over 70 face a 75% likelihood of high blood pressure, underscoring the need for real-time monitoring to mitigate these risks. While traditional methods such as blood pressure cuffs and ECG machines are effective, they lack practicality for continuous monitoring. A wearable ultrasound device, recently developed by researchers, offers a portable solution: using an ultrasound array, it measures blood pressure by analyzing blood vessel distances, enabling 24-hour monitoring. The device employs ultrasound penetration and a 4x4 grid of piezoelectric elements, providing accurate measurements up to a certain depth. In our previous work, the device's design was digitized in a simulation program based on k-Wave ultrasound simulation, which offered the potential to maximize signal-to-noise ratio and thus sensitivity. In this work, a genetic algorithm was developed within the simulation program to optimize the array distribution for sensitivity. Genetic algorithms mimic natural selection, adjusting sensor positions and evaluating their effectiveness through numerous iterations. This was realised in the simulation program by encoding the ultrasound array distribution as binary vectors that were fed into the genetic algorithm. As a result, the sensitivity of the ultrasound device was enhanced after 3000 iterations of optimisation. This research represents a significant advancement in real-time blood pressure monitoring, offering a practical, low-cost approach to optimising wearable ultrasound devices and thus mitigating the health risks associated with high blood pressure.
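The binary-vector genetic algorithm described above can be sketched as follows. The surrogate fitness function below merely stands in for the k-Wave signal-to-noise simulation (it rewards active elements near the aperture centre and penalises overcrowding) and is purely illustrative:

```python
import random

random.seed(42)
N = 16  # candidate element positions encoded as one bit each

def fitness(bits):
    """Surrogate for the simulated sensitivity of an array layout."""
    centre = (N - 1) / 2
    gain = sum(1.0 - abs(i - centre) / centre for i, b in enumerate(bits) if b)
    return gain - 0.15 * sum(bits)  # cost per active element

def evolve(pop_size=30, generations=200):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(N)               # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 3))
```

Each candidate layout is a binary vector, exactly as in the abstract; in the actual work the fitness evaluation would be a k-Wave simulation of the layout's signal-to-noise ratio rather than this closed-form stand-in.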
Speaker
Speaker biography is not available.

Session Poster-12

Poster 12 — Poster Virtual

Conference
1:00 PM — 1:30 PM EST
Local
Mar 9 Sat, 1:00 PM — 1:30 PM EST

Session Poster-13

Poster 13 — Poster Virtual

Conference
2:30 PM — 3:15 PM EST
Local
Mar 9 Sat, 2:30 PM — 3:15 PM EST

Session Poster-14

Poster 14 — Poster Virtual

Conference
2:30 PM — 3:15 PM EST
Local
Mar 9 Sat, 2:30 PM — 3:15 PM EST

Session Poster-16

Poster 15 — Poster Virtual

Conference
2:30 PM — 3:15 PM EST
Local
Mar 9 Sat, 2:30 PM — 3:15 PM EST
